W-DRAG: A joint framework of WGAN with data random augmentation optimized for generative networks for bone marrow edema detection in dual energy CT

IF 5.4 · CAS Tier 2 (Medicine) · Q1 (Engineering, Biomedical) · Computerized Medical Imaging and Graphics · Pub Date: 2024-04-24 · DOI: 10.1016/j.compmedimag.2024.102387
Chunsu Park , Jeong-Woon Kang , Doen-Eon Lee , Wookon Son , Sang-Min Lee , Chankue Park , MinWoo Kim
Citations: 0

Abstract


Dual-energy computed tomography (CT) is an excellent substitute for magnetic resonance imaging in identifying bone marrow edema. However, it is rarely used in practice owing to its low contrast. To overcome this problem, we constructed a framework based on deep learning techniques to screen for diseases using axial bone images and to identify the local positions of bone lesions. To address the limited availability of labeled samples, we developed a new generative adversarial network (GAN) that extends expressions beyond conventional augmentation (CA) methods based on geometric transformations. We theoretically and experimentally determined that combining the concepts of data augmentation optimized for GAN training (DAG) and Wasserstein GAN yields a considerably stable generation of synthetic images and effectively aligns their distribution with that of real images, thereby achieving a high degree of similarity. The classification model was trained using real and synthetic samples. Consequently, the GAN technique improved the F1 score of the diagnostic test by approximately 7.8% compared with CA. The final F1 score was 80.24%, and the recall and precision were 84.3% and 88.7%, respectively. The results obtained using the augmented samples outperformed those obtained using pure real samples without augmentation. In addition, we adopted explainable AI techniques that leverage a class activation map (CAM) and principal component analysis (PCA) to facilitate visual analysis of the network's results. The framework was designed to suggest an attention map and a scatter plot to visually explain the disease predictions of the network.
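To make the augmentation-aware WGAN idea concrete, the following is a minimal, hypothetical PyTorch sketch and not the authors' implementation: it shows a WGAN-GP critic update in which the same family of random transforms is applied to both real and generated images before scoring, which is roughly the property that DAG-style augmentation relies on (the paper's full formulation is more elaborate, e.g., it may use multiple augmented branches). The names `generator`, `critic`, `real_batch`, and `random_augment` are illustrative placeholders.

```python
import torch

def random_augment(x):
    # Toy augmentation: random horizontal flip plus a small circular shift.
    # Stands in for the richer random transforms a real pipeline would use.
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[-1])
    shift = int(torch.randint(-4, 5, (1,)).item())
    return torch.roll(x, shifts=shift, dims=-1)

def wgan_gp_critic_loss(critic, real, fake, gp_weight=10.0):
    # Wasserstein critic loss with gradient penalty (WGAN-GP).
    real_score = critic(real).mean()
    fake_score = critic(fake).mean()
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
    return fake_score - real_score + gp_weight * penalty

def critic_step(critic, generator, real_batch, z_dim=128):
    # Key point: real and synthetic images pass through the SAME kind of random
    # transform before the critic scores them, so the generator is never pushed
    # to reproduce augmentation artifacts in its outputs.
    z = torch.randn(real_batch.size(0), z_dim, device=real_batch.device)
    fake_batch = generator(z).detach()
    return wgan_gp_critic_loss(critic,
                               random_augment(real_batch),
                               random_augment(fake_batch))
```

A symmetric generator step would simply maximize the critic's score on augmented generated images; keeping the transform distribution identical on both sides is what is intended to stabilize training when labeled samples are scarce.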
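For the explainability step, a classic class activation map can be computed from the last convolutional feature maps and the weights of the final linear layer, under the assumption (not stated in the abstract) of a global-average-pooling classifier head; `feature_maps`, `fc_weights`, and `penultimate_features` below are hypothetical placeholders, and this is only an illustrative sketch, not the authors' code.

```python
import torch
from sklearn.decomposition import PCA

def class_activation_map(feature_maps, fc_weights, class_idx):
    # feature_maps: (C, H, W) activations of the last conv layer for one image.
    # fc_weights:   (num_classes, C) weights of the linear layer that follows
    #               global average pooling.
    cam = torch.einsum('c,chw->hw', fc_weights[class_idx], feature_maps)
    cam = torch.relu(cam)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]

def pca_scatter_coords(penultimate_features):
    # penultimate_features: (N, D) array of per-image embeddings; the 2-D
    # projection can then be scattered and colored by predicted class.
    return PCA(n_components=2).fit_transform(penultimate_features)
```

The resulting map can be upsampled to the CT slice size and overlaid as the attention map the abstract mentions, while the PCA projection provides the scatter-plot view of the learned feature space.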

Source journal: Computerized Medical Imaging and Graphics
CiteScore: 10.70
Self-citation rate: 3.50%
Articles published: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.
Latest articles from this journal
- Exploring transformer reliability in clinically significant prostate cancer segmentation: A comprehensive in-depth investigation
- DSIFNet: Implicit feature network for nasal cavity and vestibule segmentation from 3D head CT
- AFSegNet: few-shot 3D ankle-foot bone segmentation via hierarchical feature distillation and multi-scale attention and fusion
- VLFATRollout: Fully transformer-based classifier for retinal OCT volumes
- WISE: Efficient WSI selection for active learning in histopathology