Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans

Computerized Medical Imaging and Graphics · IF 5.4 · CAS Region 2 (Medicine) · JCR Q1 (Engineering, Biomedical) · Pub Date: 2025-01-01 · DOI: 10.1016/j.compmedimag.2024.102458
Liangchen Liu, Jianfei Liu, Bikash Santra, Christopher Parnell, Pritam Mukherjee, Tejas Mathai, Yingying Zhu, Akshaya Anand, Ronald M. Summers
Volume 119, Article 102458
Citations: 0

Abstract

Multiple intravenous contrast phases of CT scans are commonly used in clinical practice to facilitate disease diagnosis. However, contrast phase information is often missing or incorrect due to discrepancies in CT series descriptions and imaging practices. This work aims to develop a classification algorithm that automatically determines the contrast phase of a CT scan. We hypothesize that the image intensities of key organs affected by contrast enhancement (e.g., the aorta and inferior vena cava) carry the inherent feature information needed to decide the contrast phase. These organs are segmented by TotalSegmentator, and intensity features are then generated for each segmented organ region. Two internal datasets and one external dataset were collected to validate the classification accuracy. In comparison with a baseline ResNet classifier that did not use key organ features, the proposed method achieved a comparable accuracy of 92.5% and F1 score of 92.5% on one internal dataset. On the other internal dataset, the proposed method improved accuracy from 63.9% to 79.8% and the F1 score from 43.9% to 65.0%. On the external dataset, accuracy improved from 63.5% to 85.1% and the F1 score from 56.4% to 83.9%. Image intensity features from key organs are critical for improving the classification accuracy of contrast phases of CT scans, and the classification method based on these features is robust to different scanners and imaging protocols across institutions. Our results suggest improved classification accuracy over existing approaches, advancing automatic contrast phase classification toward real clinical practice. The code for this work can be found here: https://github.com/rsummers11/CT_Contrast_Phase_Classifier.
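The core idea, extracting per-organ intensity statistics from segmentation masks as contrast-phase features, can be sketched as below. This is a minimal illustration, not the authors' implementation (which is at the GitHub link above): the organ label IDs, function name, and toy Hounsfield-unit values are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical label IDs for two contrast-sensitive organs; real
# TotalSegmentator output uses its own label map, which may differ.
ORGAN_LABELS = {"aorta": 1, "inferior_vena_cava": 2}

def organ_intensity_features(ct_hu, seg):
    """Mean and std of HU intensity inside each segmented organ region."""
    feats = {}
    for name, label in ORGAN_LABELS.items():
        voxels = ct_hu[seg == label]  # boolean-mask the organ's voxels
        feats[f"{name}_mean"] = float(voxels.mean()) if voxels.size else 0.0
        feats[f"{name}_std"] = float(voxels.std()) if voxels.size else 0.0
    return feats

# Toy volume mimicking an arterial-phase pattern: bright aorta, darker IVC.
ct = np.zeros((4, 4, 4))
seg = np.zeros((4, 4, 4), dtype=int)
seg[0], ct[0] = 1, 300.0   # enhanced aorta, ~300 HU
seg[1], ct[1] = 2, 80.0    # inferior vena cava, ~80 HU
f = organ_intensity_features(ct, seg)
print(f["aorta_mean"], f["inferior_vena_cava_mean"])  # → 300.0 80.0
```

A feature vector like this (one mean/std pair per organ) would then be fed to a downstream classifier; the intuition is that the relative enhancement of the aorta versus the inferior vena cava differs systematically across contrast phases.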
Journal Metrics

CiteScore: 10.70
Self-citation rate: 3.50%
Articles published: 71
Review time: 26 days
Journal Introduction: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.