Segmentation-based quantitative measurements in renal CT imaging using deep learning

European Radiology Experimental (IF 3.7, Q1 in Radiology, Nuclear Medicine & Medical Imaging) · Publication date: 2024-10-09 · DOI: 10.1186/s41747-024-00507-4
Konstantinos Koukoutegos, Richard 's Heeren, Liesbeth De Wever, Frederik De Keyzer, Frederik Maes, Hilde Bosmans
{"title":"利用深度学习在肾脏 CT 成像中进行基于分割的定量测量。","authors":"Konstantinos Koukoutegos, Richard 's Heeren, Liesbeth De Wever, Frederik De Keyzer, Frederik Maes, Hilde Bosmans","doi":"10.1186/s41747-024-00507-4","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Renal quantitative measurements are important descriptors for assessing kidney function. We developed a deep learning-based method for automated kidney measurements from computed tomography (CT) images.</p><p><strong>Methods: </strong>The study datasets comprised potential kidney donors (n = 88), both contrast-enhanced (Dataset 1 CE) and noncontrast (Dataset 1 NC) CT scans, and test sets of contrast-enhanced cases (Test set 2, n = 18), cases from a photon-counting (PC)CT scanner reconstructed at 60 and 190 keV (Test set 3 PCCT, n = 15), and low-dose cases (Test set 4, n = 8), which were retrospectively analyzed to train, validate, and test two networks for kidney segmentation and subsequent measurements. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). The quantitative measurements' effectiveness was compared to manual annotations using the intraclass correlation coefficient (ICC).</p><p><strong>Results: </strong>The contrast-enhanced and noncontrast models demonstrated excellent reliability in renal segmentation with DSC of 0.95 (Test set 1 CE), 0.94 (Test set 2), 0.92 (Test set 3 PCCT) and 0.94 (Test set 1 NC), 0.92 (Test set 3 PCCT), and 0.93 (Test set 4). Volume estimation was accurate with mean volume errors of 4%, 3%, 6% mL (contrast test sets) and 4%, 5%, 7% mL (noncontrast test sets). Renal axes measurements (length, width, and thickness) had ICC values greater than 0.90 (p < 0.001) for all test sets, supported by narrow 95% confidence intervals.</p><p><strong>Conclusion: </strong>Two deep learning networks were shown to derive quantitative measurements from contrast-enhanced and noncontrast renal CT imaging at the human performance level.</p><p><strong>Relevance statement: </strong>Deep learning-based networks can automatically obtain renal clinical descriptors from both noncontrast and contrast-enhanced CT images. When healthy subjects comprise the training cohort, careful consideration is required during model adaptation, especially in scenarios involving unhealthy kidneys. This creates an opportunity for improved clinical decision-making without labor-intensive manual effort.</p><p><strong>Key points: </strong>Trained 3D UNet models quantify renal measurements from contrast and noncontrast CT. The models performed interchangeably to the manual annotator and to each other. The models can provide expert-level, quantitative, accurate, and rapid renal measurements.</p>","PeriodicalId":36926,"journal":{"name":"European Radiology Experimental","volume":"8 1","pages":"110"},"PeriodicalIF":3.7000,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11465135/pdf/","citationCount":"0","resultStr":"{\"title\":\"Segmentation-based quantitative measurements in renal CT imaging using deep learning.\",\"authors\":\"Konstantinos Koukoutegos, Richard 's Heeren, Liesbeth De Wever, Frederik De Keyzer, Frederik Maes, Hilde Bosmans\",\"doi\":\"10.1186/s41747-024-00507-4\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Renal quantitative measurements are important descriptors for assessing kidney function. 
We developed a deep learning-based method for automated kidney measurements from computed tomography (CT) images.</p><p><strong>Methods: </strong>The study datasets comprised potential kidney donors (n = 88), both contrast-enhanced (Dataset 1 CE) and noncontrast (Dataset 1 NC) CT scans, and test sets of contrast-enhanced cases (Test set 2, n = 18), cases from a photon-counting (PC)CT scanner reconstructed at 60 and 190 keV (Test set 3 PCCT, n = 15), and low-dose cases (Test set 4, n = 8), which were retrospectively analyzed to train, validate, and test two networks for kidney segmentation and subsequent measurements. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). The quantitative measurements' effectiveness was compared to manual annotations using the intraclass correlation coefficient (ICC).</p><p><strong>Results: </strong>The contrast-enhanced and noncontrast models demonstrated excellent reliability in renal segmentation with DSC of 0.95 (Test set 1 CE), 0.94 (Test set 2), 0.92 (Test set 3 PCCT) and 0.94 (Test set 1 NC), 0.92 (Test set 3 PCCT), and 0.93 (Test set 4). Volume estimation was accurate with mean volume errors of 4%, 3%, 6% mL (contrast test sets) and 4%, 5%, 7% mL (noncontrast test sets). Renal axes measurements (length, width, and thickness) had ICC values greater than 0.90 (p < 0.001) for all test sets, supported by narrow 95% confidence intervals.</p><p><strong>Conclusion: </strong>Two deep learning networks were shown to derive quantitative measurements from contrast-enhanced and noncontrast renal CT imaging at the human performance level.</p><p><strong>Relevance statement: </strong>Deep learning-based networks can automatically obtain renal clinical descriptors from both noncontrast and contrast-enhanced CT images. When healthy subjects comprise the training cohort, careful consideration is required during model adaptation, especially in scenarios involving unhealthy kidneys. This creates an opportunity for improved clinical decision-making without labor-intensive manual effort.</p><p><strong>Key points: </strong>Trained 3D UNet models quantify renal measurements from contrast and noncontrast CT. The models performed interchangeably to the manual annotator and to each other. 
The models can provide expert-level, quantitative, accurate, and rapid renal measurements.</p>\",\"PeriodicalId\":36926,\"journal\":{\"name\":\"European Radiology Experimental\",\"volume\":\"8 1\",\"pages\":\"110\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11465135/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Radiology Experimental\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s41747-024-00507-4\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Radiology Experimental","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s41747-024-00507-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract


Background: Renal quantitative measurements are important descriptors for assessing kidney function. We developed a deep learning-based method for automated kidney measurements from computed tomography (CT) images.

Methods: The study datasets comprised potential kidney donors (n = 88), both contrast-enhanced (Dataset 1 CE) and noncontrast (Dataset 1 NC) CT scans, and test sets of contrast-enhanced cases (Test set 2, n = 18), cases from a photon-counting (PC)CT scanner reconstructed at 60 and 190 keV (Test set 3 PCCT, n = 15), and low-dose cases (Test set 4, n = 8), which were retrospectively analyzed to train, validate, and test two networks for kidney segmentation and subsequent measurements. Segmentation performance was evaluated using the Dice similarity coefficient (DSC). The quantitative measurements' effectiveness was compared to manual annotations using the intraclass correlation coefficient (ICC).
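
As an illustration of the segmentation evaluation described above, here is a minimal sketch of computing the Dice similarity coefficient (DSC) between a predicted and a manually annotated binary kidney mask. The function name and the use of NumPy are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D masks of equal shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denominator = pred.sum() + ref.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denominator
```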

Results: The contrast-enhanced and noncontrast models demonstrated excellent reliability in renal segmentation, with DSCs of 0.95 (Test set 1 CE), 0.94 (Test set 2), and 0.92 (Test set 3 PCCT) for the contrast-enhanced model, and 0.94 (Test set 1 NC), 0.92 (Test set 3 PCCT), and 0.93 (Test set 4) for the noncontrast model. Volume estimation was accurate, with mean volume errors of 4%, 3%, and 6% (contrast-enhanced test sets) and 4%, 5%, and 7% (noncontrast test sets). Renal axis measurements (length, width, and thickness) had ICC values greater than 0.90 (p < 0.001) for all test sets, supported by narrow 95% confidence intervals.
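
The volume figures above are the kind of measurement that follows directly from a voxel-wise segmentation. Below is a minimal sketch, assuming the mask is stored as a NIfTI file and that SimpleITK is available; neither the file format nor the library is specified in the abstract.

```python
import numpy as np
import SimpleITK as sitk

def kidney_volume_ml(mask_path: str) -> float:
    """Kidney volume in millilitres from a binary segmentation mask.

    Each foreground voxel contributes its physical volume; spacing is
    reported in mm, so mm^3 are converted to mL.
    """
    mask_img = sitk.ReadImage(mask_path)
    spacing = mask_img.GetSpacing()            # (x, y, z) voxel size in mm
    voxel_volume_mm3 = float(np.prod(spacing))
    mask = sitk.GetArrayFromImage(mask_img)    # array indexed as (z, y, x)
    n_foreground = int(np.count_nonzero(mask))
    return n_foreground * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
```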

Conclusion: Two deep learning networks were shown to derive quantitative measurements from contrast-enhanced and noncontrast renal CT imaging at the human performance level.

Relevance statement: Deep learning-based networks can automatically obtain renal clinical descriptors from both noncontrast and contrast-enhanced CT images. When healthy subjects comprise the training cohort, careful consideration is required during model adaptation, especially in scenarios involving unhealthy kidneys. This creates an opportunity for improved clinical decision-making without labor-intensive manual effort.

Key points: Trained 3D UNet models quantify renal measurements from contrast-enhanced and noncontrast CT. The models performed interchangeably with the manual annotator and with each other. The models can provide expert-level, quantitative, accurate, and rapid renal measurements.
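
The key points refer to trained 3D UNet models. The abstract does not name the framework or the architecture hyperparameters, so the following is only a hedged sketch of how a generic 3D U-Net for kidney-versus-background segmentation might be instantiated, assuming MONAI is available; the channel counts, strides, and patch size are illustrative guesses, not values from the paper.

```python
import torch
from monai.networks.nets import UNet

# Illustrative 3D U-Net for kidney vs. background segmentation.
# All hyperparameters below are assumptions, not taken from the paper.
model = UNet(
    spatial_dims=3,
    in_channels=1,                      # single-channel CT volume
    out_channels=2,                     # background + kidney
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)

# Forward pass on a dummy CT patch (batch, channel, depth, height, width).
dummy_patch = torch.zeros(1, 1, 96, 96, 96)
logits = model(dummy_patch)             # shape: (1, 2, 96, 96, 96)
```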

Source journal: European Radiology Experimental (Medicine - Radiology, Nuclear Medicine and Imaging)
CiteScore: 6.70 · Self-citation rate: 2.60% · Publication volume: 56 · Review turnaround: 18 weeks