
Journal of Medical Imaging: Latest Publications

Multiscale attention network with structure guidance for colorectal polyp segmentation.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-12-04 DOI: 10.1117/1.JMI.12.6.064004
Yang Yang, Jie Gao, Lanling Zeng, Xinsheng Wang, Xinyu Wang

Purpose: Accurate segmentation and precise delineation of colorectal polyp structures are crucial for early clinical diagnosis and treatment planning. However, existing polyp segmentation techniques face significant challenges due to the high variability in polyp size and morphology, as well as the frequent indistinctness of polyp-tissue structures.

Approach: To address these challenges, we propose a multiscale attention network with structure guidance (MAN-SG). The core of MAN-SG is a structure extraction module (SEM) designed to capture rich structural information from fine-grained early-stage encoder features. In addition, we introduce a cross-scale structure guided attention (CSGA) module that effectively fuses multiscale features under the guidance of the structural information provided by the SEM, thereby enabling more accurate delineation of polyp structures. MAN-SG is implemented and evaluated using two high-performance backbone networks: Res2Net-50 and PVTv2-B2.

Results: Extensive experiments were conducted on five benchmark datasets for polyp segmentation. The results demonstrate that MAN-SG consistently outperforms existing state-of-the-art methods across these datasets.

Conclusion: The proposed MAN-SG framework, which leverages structural guidance via SEM and CSGA modules, proves to be both highly effective and robust for the challenging task of colorectal polyp segmentation.
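The abstract does not give the fusion equations; as a hedged illustration of the general idea, structure-gated fusion of fine early-stage features with upsampled coarse features might be sketched as below. The gating form, shapes, and function names are illustrative assumptions, not the paper's CSGA definition.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling of a (C, H/2, W/2) map to (C, H, W).
    return x.repeat(2, axis=1).repeat(2, axis=2)

def structure_gated_fusion(fine, coarse, structure):
    """Blend fine and coarse features under a structure map.

    fine:      (C, H, W) early-stage encoder features (edge-rich)
    coarse:    (C, H/2, W/2) deeper semantic features
    structure: (H, W) structure evidence in [0, 1], e.g. from an SEM-like module
    """
    gate = structure[None, :, :]  # broadcast the gate over channels
    return gate * fine + (1.0 - gate) * upsample2x(coarse)
```

Where the structure map is strong, the fused output keeps the fine, boundary-preserving features; elsewhere it falls back to the upsampled semantic features.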

Citations: 0
Img2ST-Net: efficient high-resolution spatial omics prediction from whole-slide histology images via fully convolutional image-to-image learning.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-11-07 DOI: 10.1117/1.JMI.12.6.061410
Junchao Zhu, Ruining Deng, Junlin Guo, Tianyuan Yao, Juming Xiong, Chongyu Qu, Mengmeng Yin, Yu Wang, Shilin Zhao, Haichun Yang, Daguang Xu, Yucheng Tang, Yuankai Huo
Purpose: Recent advances in multimodal artificial intelligence (AI) have demonstrated promising potential for generating the currently expensive spatial transcriptomics (ST) data directly from routine histology images, offering a means to reduce the high cost and time-intensive nature of ST data acquisition. However, the increasing resolution of ST, particularly with platforms such as Visium HD achieving 8 μm or finer, introduces significant computational and modeling challenges. Conventional spot-by-spot sequential regression frameworks become inefficient and unstable at this scale, whereas the inherent extreme sparsity and low expression levels of high-resolution ST further complicate both prediction and evaluation.

Approach: To address these limitations, we propose Img2ST-Net, a high-definition (HD) histology-to-ST generation framework for efficient and parallel high-resolution ST prediction. Unlike conventional spot-by-spot inference methods, Img2ST-Net employs a fully convolutional architecture to generate dense, HD gene expression maps in a parallelized manner. By modeling HD ST data as super-pixel representations, the task is reformulated from image-to-omics inference into a super-content image generation problem with hundreds or thousands of output channels. This design not only improves computational efficiency but also better preserves the spatial organization intrinsic to spatial omics data. To enhance robustness under sparse expression patterns, we further introduce SSIM-ST, a structural-similarity-based evaluation metric tailored for high-resolution ST analysis.

Results: Evaluations on two public Visium HD datasets at 8 and 16 μm resolutions demonstrate that Img2ST-Net outperforms state-of-the-art methods in both accuracy and spatial coherence. On the Breast Cancer dataset at 16 μm, Img2ST-Net achieves a mean squared error (MSE) of 0.1657 and a structural similarity index of 0.0937, whereas on the Colorectal Cancer dataset, it reaches an MSE of 0.7981 and a mean absolute error of 0.5208. These results highlight its ability to capture fine-grained gene expression patterns. In addition, our region-wise modeling significantly reduces training time without sacrificing performance, achieving up to 28-fold acceleration over conventional spot-wise methods. Ablation studies further validate the contribution of contrastive learning in enhancing spatial fidelity. The source code has been made publicly available at https://github.com/hrlblab/Img2ST-Net.

Conclusions: We present a scalable, biologically coherent framework for high-resolution ST prediction. Img2ST-Net offers a principled solution for efficient and accurate ST inference at scale. Our contributions lay the groundwork for the next generation of robust, resolution-aware ST modeling.
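SSIM-ST itself is not specified in the abstract; as a hedged illustration of the structural-similarity idea it builds on, a global (single-window) SSIM between two expression maps can be computed as follows. The constants correspond to the conventional K1=0.01, K2=0.03 choices for a unit dynamic range, and the single-window simplification is an assumption, not the paper's definition.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Global (single-window) structural similarity between two maps
    # normalized to [0, 1]; c1, c2 stabilize the ratio near zero.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance_contrast = (2 * mx * my + c1) * (2 * cov + c2)
    normalizer = (mx**2 + my**2 + c1) * (vx + vy + c2)
    return luminance_contrast / normalizer
```

A map compared against itself scores 1.0; sparse maps with matching nonzero structure score much higher than maps whose nonzero spots disagree, which is the property that makes a structural metric attractive for sparse high-resolution ST.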
Citations: 0
Quantification-based explainable artificial intelligence for deep learning decisions: clustering and visualization of quantitative morphometric features in hepatocellular carcinoma discrimination.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-10-11 DOI: 10.1117/1.JMI.12.6.061407
Gen Takagi, Saori Takeyama, Tokiya Abe, Akinori Hashiguchi, Michiie Sakamoto, Kenji Suzuki, Masahiro Yamaguchi

Purpose: Deep learning (DL) is rapidly advancing in computational pathology, offering high diagnostic accuracy but often functioning as a "black box" with limited interpretability. This lack of transparency hinders its clinical adoption, emphasizing the need for quantitative explainable artificial intelligence (QXAI) methods. We propose a QXAI approach to objectively and quantitatively elucidate the reasoning behind DL model decisions in hepatocellular carcinoma (HCC) pathological image analysis.

Approach: The proposed method utilizes clustering in the latent space of embeddings generated by a DL model to identify regions that contribute to the model's discrimination. Each cluster is then quantitatively characterized by morphometric features obtained through nuclear segmentation using HoverNet and key feature selection with LightGBM. Statistical analysis is performed to assess the importance of selected features, ensuring an interpretable relationship between morphological characteristics and classification outcomes. This approach enables the quantitative interpretation of which regions and features are critical for the model's decision-making, without sacrificing accuracy.

Results: Experiments on pathology images of hematoxylin-and-eosin-stained HCC tissue sections showed that the proposed method effectively identified key discriminatory regions and features, such as nuclear size, chromatin density, and shape irregularity. The clustering-based analysis provided structured insights into morphological patterns influencing classification, with explanations evaluated as clinically relevant and interpretable by a pathologist.

Conclusions: Our QXAI framework enhances the interpretability of DL-based pathology analysis by linking morphological features to classification decisions. This fosters trust in DL models and facilitates their clinical integration.
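A minimal sketch of the clustering step described in the Approach, assuming k-means over patch embeddings and one scalar morphometric feature per patch; the actual pipeline uses HoverNet nuclear segmentation and LightGBM feature selection, which this omits. Function names and the farthest-point initialization are illustrative choices.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic farthest-point initialization, then Lloyd iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def cluster_feature_profile(labels, feature):
    # Mean of one morphometric feature (e.g. nuclear area) per cluster,
    # giving each latent-space cluster an interpretable characterization.
    return {int(j): float(feature[labels == j].mean()) for j in np.unique(labels)}
```

The profile dictionary is the quantitative link the paper aims for: each region cluster that drives the classifier is summarized by interpretable morphology rather than raw activations.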

Citations: 0
Workload reduction of digital breast tomosynthesis screening using artificial intelligence and synthetic mammography: a simulation study.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-04-30 DOI: 10.1117/1.JMI.12.S2.S22005
Victor Dahlblom, Magnus Dustler, Sophia Zackrisson, Anders Tingberg

Purpose: To achieve the high sensitivity of digital breast tomosynthesis (DBT), a time-consuming reading is necessary. However, synthetic mammography (SM) images, equivalent to digital mammography (DM), can be generated from DBT images. SM is faster to read and might be sufficient in many cases. We investigate using artificial intelligence (AI) to stratify examinations into reading of either SM or DBT to minimize workload and maximize accuracy.

Approach: This is a retrospective study based on double-read paired DM and one-view DBT from the Malmö Breast Tomosynthesis Screening Trial. DBT examinations were analyzed with the cancer detection AI system ScreenPoint Transpara 1.7. For low-risk examinations, SM reading was simulated by assuming equality with DM reading. For high-risk examinations, the DBT reading results were used. Different combinations of single and double reading were studied.

Results: By double-reading the DBT of the 30% (4452/14,772) of cases with the highest risk, and single-reading SM for the rest, 122 cancers would be detected with the same reading workload as DM double reading. That is, 28% (27/95) more cancers would be detected than with DM double reading, and in total, 96% (122/127) of the cancers detectable with full DBT double reading would be found.

Conclusions: In a DBT-based screening program, AI could be used to select high-risk cases where the reading of DBT is valuable, whereas SM is sufficient for low-risk cases. Substantially more cancers could be detected than with DM alone, with only a limited increase in reading workload. Prospective studies are necessary.
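The workload arithmetic in the Results can be checked directly from the reported counts:

```python
# Counts reported in the abstract
total_exams = 14772
high_risk_dbt = 4452        # cases double-read with DBT
cancers_stratified = 122    # AI-stratified protocol (DBT high risk, SM rest)
cancers_dm = 95             # DM double reading
cancers_full_dbt = 127      # full DBT double reading

high_risk_share = round(100 * high_risk_dbt / total_exams)            # 30%
extra_vs_dm = cancers_stratified - cancers_dm                         # 27 cancers
pct_more_than_dm = round(100 * extra_vs_dm / cancers_dm)              # 28%
pct_of_full_dbt = round(100 * cancers_stratified / cancers_full_dbt)  # 96%
```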

Citations: 0
Glo-In-One-v2: holistic identification of glomerular cells, tissues, and lesions in human and mouse histopathology.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-07-28 DOI: 10.1117/1.JMI.12.6.061406
Lining Yu, Mengmeng Yin, Ruining Deng, Quan Liu, Tianyuan Yao, Can Cui, Junlin Guo, Yu Wang, Yaohong Wang, Shilin Zhao, Haichun Yang, Yuankai Huo

Purpose: Segmenting intraglomerular tissue and glomerular lesions traditionally depends on detailed morphological evaluations by expert nephropathologists, a labor-intensive process susceptible to interobserver variability. Our group previously developed the Glo-In-One toolkit for integrated glomerulus detection and segmentation. We extend the Glo-In-One toolkit to version 2 (Glo-In-One-v2), which adds fine-grained segmentation capabilities. We curated 14 distinct labels spanning tissue regions, cells, and lesions across 23,529 annotated glomeruli from human and mouse histopathology data. To our knowledge, this dataset is among the largest of its kind to date.

Approach: We present a single dynamic-head deep learning architecture for segmenting 14 classes within partially labeled images from human and mouse kidney pathology. The model was trained on data derived from 368 annotated kidney whole-slide images with five key intraglomerular tissue types and nine glomerular lesion types.

Results: The glomerulus segmentation model performed well against baselines, achieving a 76.5% average Dice similarity coefficient. In addition, transfer learning from rodent to human for the glomerular lesion segmentation model has enhanced the average segmentation accuracy across different types of lesions by more than 3%, as measured by Dice scores.

Conclusions: We introduce a convolutional neural network for multiclass segmentation of intraglomerular tissue and lesions. The Glo-In-One-v2 model and pretrained weight are publicly available at https://github.com/hrlblab/Glo-In-One_v2.
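The Dice similarity coefficient reported above is the standard overlap measure for segmentation; a minimal numpy version for binary masks (the smoothing constant eps is a common implementation convenience, not part of the paper):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    # Dice similarity coefficient between two binary masks:
    # 2|A ∩ B| / (|A| + |B|), 1.0 for perfect overlap, 0.0 for none.
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

Averaging this per class over the 14 labels yields the kind of "average Dice" figure quoted in the Results.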

Citations: 0
Comparing percent breast density assessments of an AI-based method with expert reader estimates: inter-observer variability.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-06-12 DOI: 10.1117/1.JMI.12.S2.S22011
Stepan Romanov, Sacha Howell, Elaine Harkness, Dafydd Gareth Evans, Sue Astley, Martin Fergie

Purpose: Breast density estimation is an important part of breast cancer risk assessment, as mammographic density is associated with risk. However, density assessed by multiple experts can be subject to high inter-observer variability, so automated methods are increasingly used. We investigate the inter-reader variability and risk prediction for expert assessors and a deep learning approach.

Approach: Screening data from a cohort of 1328 women, case-control matched, was used to compare between two expert readers and between a single reader and a deep learning model, Manchester artificial intelligence - visual analog scale (MAI-VAS). Bland-Altman analysis was used to assess the variability and matched concordance index to assess risk.

Results: Although the mean differences for the two experiments were alike, the limits of agreement between MAI-VAS and a single reader are substantially narrower, at +SD 21 (95% CI: 19.65, 21.69) and −SD 22 (95% CI: −22.71, −20.68), than between two expert readers, at +SD 31 (95% CI: 29.23, 32.08) and −SD 29 (95% CI: −29.94, −27.09). In addition, breast cancer risk discrimination for the deep learning method and density readings from a single expert was similar, with a matched concordance of 0.628 (95% CI: 0.598, 0.658) and 0.624 (95% CI: 0.595, 0.654), respectively. The automatic method had a similar inter-view agreement to experts and maintained consistency across density quartiles.

Conclusions: The artificial intelligence breast density assessment tool MAI-VAS has a better inter-observer agreement with a randomly selected expert reader than that between two expert readers. Deep learning-based density methods provide consistent density scores without compromising on breast cancer risk discrimination.
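The Bland-Altman analysis used above reduces to a short computation on paired scores; the 1.96 multiplier is the standard convention for 95% limits of agreement, and the sample data in the usage note are illustrative.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement for paired scores.

    a, b: paired density estimates (e.g. VAS on a 0-100 scale) from two
    assessors for the same mammograms.
    """
    diff = a - b
    md = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation of differences
    return md, md - 1.96 * sd, md + 1.96 * sd
```

Narrower limits, as found for MAI-VAS versus a single reader, mean the two sets of scores disagree less across the density range.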

Citations: 0
Impact of synthetic data on training a deep learning model for lesion detection and classification in contrast-enhanced mammography.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-04-28 DOI: 10.1117/1.JMI.12.S2.S22006
Astrid Van Camp, Henry C Woodruff, Lesley Cockmartin, Marc Lobbes, Michael Majer, Corinne Balleyguier, Nicholas W Marshall, Hilde Bosmans, Philippe Lambin

Purpose: Predictive models for contrast-enhanced mammography often perform better at detecting and classifying enhancing masses than (non-enhancing) microcalcification clusters. We aim to investigate whether incorporating synthetic data with simulated microcalcification clusters during training can enhance model performance.

Approach: Microcalcification clusters were simulated in low-energy images of lesion-free breasts from 782 patients, considering local texture features. Enhancement was simulated in the corresponding recombined images. A deep learning (DL) model for lesion detection and classification was trained with varying ratios of synthetic and real (850 patients) data. In addition, a handcrafted radiomics classifier was trained using delineations and class labels from real data, and predictions from both models were ensembled. Validation was performed on internal (212 patients) and external (279 patients) real datasets.
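The abstract does not state how the DL and radiomics predictions were ensembled; a common baseline, shown here purely as an assumed illustration, is a weighted average of the two models' per-lesion class probabilities:

```python
def ensemble_probs(p_dl, p_radiomics, w_dl=0.5):
    """Weighted average of per-lesion malignancy probabilities from two models.

    w_dl is the weight given to the deep learning model; 1 - w_dl goes to the
    handcrafted radiomics classifier. Equal weights are an assumption, not the
    paper's configuration.
    """
    if not 0.0 <= w_dl <= 1.0:
        raise ValueError("w_dl must lie in [0, 1]")
    return [w_dl * p + (1.0 - w_dl) * q for p, q in zip(p_dl, p_radiomics)]

# Example: three candidate lesions scored by both models
combined = ensemble_probs([0.9, 0.2, 0.6], [0.7, 0.4, 0.6])
```

As the Results show, such an ensemble can underperform the standalone DL model when one component contributes falsely detected regions of interest.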

Results: The DL model trained exclusively with synthetic data detected over 60% of malignant lesions. Adding synthetic data to smaller real training sets improved detection sensitivity for malignant lesions but decreased precision. Performance plateaued at a detection sensitivity of 0.80. The ensembled DL and radiomics models performed worse than the standalone DL model, decreasing the area under the receiver operating characteristic curve from 0.75 to 0.60 on the external validation set, likely due to falsely detected suspicious regions of interest.

Conclusions: Synthetic data can enhance DL model performance, provided model setup and data distribution are optimized. The possibility to detect malignant lesions without real data present in the training set confirms the utility of synthetic data. It can serve as a helpful tool, especially when real data are scarce, and it is most effective when complementing real data.

Citations: 0
Application of thin-slice and accelerated T1-weighted GRE sequences in 1.5T abdominal magnetic resonance imaging using deep learning image reconstruction.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-12-10 DOI: 10.1117/1.JMI.12.6.064005
Natalie S Joos, Saif Afat, Marcel Dominik Nickel, Elisabeth Weiland, Judith Herrmann, Stephan Ursprung, Haidara Almansour, Andreas Lingg, Sebastian Werner, Bianca Haase, Konstantin Nikolaou, Sebastian Gassenmaier
Purpose: Deep learning (DL)-based image reconstruction (DLR) is a key technique for reducing acquisition time (TA) and increasing morphologic resolution in abdominal magnetic resonance imaging (MRI). We aim to compare the performance of a standard (VIBE_Std) gradient echo (GRE) sequence with Dixon fat separation versus an accelerated ultra-fast (VIBE_UF) and a high-resolution (VIBE_HR) T1-weighted GRE sequence with Dixon fat separation and DLR.

Approach: A total of 50 patients with an abdominal 1.5T MRI, with a mean age of 59 ± 11 years, were prospectively included from January to July 2023. Each examination protocol included VIBE_Std, VIBE_UF, and VIBE_HR. Both DL sequences use more aggressive parallel imaging and partial Fourier sampling to reduce TA (slice thickness: VIBE_Std and VIBE_UF 3 mm, VIBE_HR 2 mm). Each contrast-enhanced dataset was evaluated independently by four radiologists for noise, artifacts, sharpness/contrast, overall image quality, and diagnostic confidence on a Likert scale of 1 to 5 (5 = best).

Results: VIBE_UF significantly reduced TA (mean 7.3 s versus 15.0 s for VIBE_Std and 14.5 s for VIBE_HR; p < 0.001). Both DL sequences provided significantly better sharpness/contrast for all organs compared with VIBE_Std (median 5 versus 4; p < 0.001). VIBE_UF showed less noise than VIBE_Std (median 5 versus 4; p < 0.001), but VIBE_Std was less affected by artifacts than both DL sequences (median 5 versus 4; p < 0.001). Overall image quality was better for both DL sequences than for VIBE_Std (median 5 versus 4; p < 0.001). Diagnostic confidence and lesion detection did not differ significantly (p > 0.05).

Conclusions: DL-based image reconstruction significantly improved the overall image quality of VIBE_UF and VIBE_HR, with VIBE_UF reducing TA by about 50%.
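The reported mean acquisition times imply the roughly 50% time saving for VIBE_UF; a quick arithmetic check:

```python
# Mean acquisition times in seconds, as reported in the Results
ta_uf, ta_std, ta_hr = 7.3, 15.0, 14.5

# Relative time saving of the ultra-fast sequence versus the two comparators
saving_vs_std = (ta_std - ta_uf) / ta_std  # ~0.51, i.e., about 51%
saving_vs_hr = (ta_hr - ta_uf) / ta_hr     # ~0.50, i.e., about 50%
```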
Citations: 0
Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-05-29 DOI: 10.1117/1.JMI.12.S2.S22010
Vincent Dong, Walter Mankowski, Telmo M Silva Filho, Anne Marie McCarthy, Despina Kontos, Andrew D A Maidment, Bruno Barufaldi

Purpose: Breast cancer risk assessment depends on accurate measurement of breast density because dense tissue can mask lesions. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability.

Approach: We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes.
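The per-grade AUCs reported in the Results can be estimated without any ML library via the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal binary sketch with made-up classifier scores (the paper's setting is per density grade, i.e., one-vs-rest):

```python
def auc(scores, labels):
    """Mann-Whitney estimate of the area under the ROC curve.

    labels are 1 for the positive class (e.g., one density grade in a
    one-vs-rest setup) and 0 otherwise; ties count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0
example_auc = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```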

Results: LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per-density-grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades.

Conclusions: Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. Our results motivate future research into correlating the selected radiomic features with clinical descriptors of breast tissue density.

Citations: 0
Assessing mammographic density change within individuals across screening rounds using deep learning-based software.
IF 1.7 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-11-01 Epub Date: 2025-08-13 DOI: 10.1117/1.JMI.12.S2.S22017
Jakob Olinder, Daniel Förnvik, Victor Dahlblom, Viktor Lu, Anna Åkesson, Kristin Johnson, Sophia Zackrisson

Purpose: We aim to evaluate the change in mammographic density within individuals across screening rounds using automatic density software, to evaluate whether a change in breast density is associated with a future breast cancer diagnosis, and to provide insight into how breast density evolves over time.

Approach: Mammographic breast density was analyzed in women screened in Malmö, Sweden, between 2010 and 2015 who had undergone at least two consecutive screening rounds < 30 months apart. The volumetric and area-based densities were measured with deep learning-based software and fully automated software, respectively. The change in volumetric breast density percentage (VBD%) between two consecutive screening examinations was determined. Multiple linear regression was used to investigate the association between VBD% change in percentage points and future breast cancer, as well as the initial VBD%, adjusting for age group and the time between examinations. Examinations with potential positioning issues were removed in a sensitivity analysis.
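The paper's model adjusts for age group and the time between examinations; as a deliberately simplified sketch under that caveat, a single-covariate least-squares fit of density change on initial density shows how a negative slope (denser breasts declining more) would be estimated:

```python
import statistics

def ols_slope_intercept(x, y):
    """Ordinary least-squares fit y = intercept + slope * x (one covariate).

    A simplified stand-in for the paper's multiple linear regression, which
    also adjusts for age group and inter-examination time.
    """
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Made-up data in which initially denser breasts lose more VBD% per round
initial_vbd = [5.0, 10.0, 15.0, 20.0]
vbd_change = [-0.2, -0.7, -1.2, -1.7]
slope, intercept = ols_slope_intercept(initial_vbd, vbd_change)
```

A negative fitted slope here plays the role of the adjusted β for initial VBD% reported in the Results; the illustrative data are not from the study.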

Results: In the 26,056 included women, the mean VBD% decreased from 10.7% [95% confidence interval (CI): 10.6 to 10.8] to 10.3% (95% CI: 10.2 to 10.3) (p < 0.001) between the two examinations. The decline in VBD% was more pronounced in women with initially denser breasts (adjusted β = -0.10, p < 0.001) and less pronounced in women with a future breast cancer diagnosis (adjusted β = 0.16, p = 0.02).

Conclusions: The demonstrated density changes over time support the potential of using breast density change in risk assessment tools and provide insights for future risk-based screening.

Citations: 0