Cooperative multi-task learning and interpretable image biomarkers for glioma grading and molecular subtyping.

IF 10.7 | CAS Tier 1 (Medicine) | JCR Q1 (Computer Science, Artificial Intelligence) | Medical Image Analysis | Pub Date: 2024-12-30 | DOI: 10.1016/j.media.2024.103435
Qijian Chen, Lihui Wang, Zeyu Deng, Rongpin Wang, Li Wang, Caiqing Jian, Yue-Min Zhu
{"title":"神经胶质瘤分级和分子分型的合作多任务学习和可解释的图像生物标志物。","authors":"Qijian Chen, Lihui Wang, Zeyu Deng, Rongpin Wang, Li Wang, Caiqing Jian, Yue-Min Zhu","doi":"10.1016/j.media.2024.103435","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning methods have been widely used for various glioma predictions. However, they are usually task-specific, segmentation-dependent and lack of interpretable biomarkers. How to accurately predict the glioma histological grade and molecular subtypes at the same time and provide reliable imaging biomarkers is still challenging. To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract the tumor-aware features in a classification manner rather than a segmentation manner. Following that, based on the multi-scale tumor-aware features extracted by SFTFP module, CFE uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, based on orthogonal projection and conditional classification strategies, the task-specific unique features are extracted. In UCFC, the unique and common features are fused with an attention mechanism to make them adaptive to different glioma prediction tasks. Finally, deep features-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through the comparisons with recent reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet, with the mean Matthews correlation coefficient in validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) respectively for glioma grading, 1p/19q and IDH status prediction tasks. In addition, we found that some radiomic features are highly related to uninterpretable deep features and that their variation trends are consistent in multi-center datasets, which can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet provides an interpretable tool for glioma multi-task prediction, which is beneficial for glioma precise diagnosis and personalized treatment.</p>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"101 ","pages":"103435"},"PeriodicalIF":10.7000,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cooperative multi-task learning and interpretable image biomarkers for glioma grading and molecular subtyping.\",\"authors\":\"Qijian Chen, Lihui Wang, Zeyu Deng, Rongpin Wang, Li Wang, Caiqing Jian, Yue-Min Zhu\",\"doi\":\"10.1016/j.media.2024.103435\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning methods have been widely used for various glioma predictions. However, they are usually task-specific, segmentation-dependent and lack of interpretable biomarkers. How to accurately predict the glioma histological grade and molecular subtypes at the same time and provide reliable imaging biomarkers is still challenging. 
To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract the tumor-aware features in a classification manner rather than a segmentation manner. Following that, based on the multi-scale tumor-aware features extracted by SFTFP module, CFE uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, based on orthogonal projection and conditional classification strategies, the task-specific unique features are extracted. In UCFC, the unique and common features are fused with an attention mechanism to make them adaptive to different glioma prediction tasks. Finally, deep features-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through the comparisons with recent reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet, with the mean Matthews correlation coefficient in validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) respectively for glioma grading, 1p/19q and IDH status prediction tasks. In addition, we found that some radiomic features are highly related to uninterpretable deep features and that their variation trends are consistent in multi-center datasets, which can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet provides an interpretable tool for glioma multi-task prediction, which is beneficial for glioma precise diagnosis and personalized treatment.</p>\",\"PeriodicalId\":18328,\"journal\":{\"name\":\"Medical image analysis\",\"volume\":\"101 \",\"pages\":\"103435\"},\"PeriodicalIF\":10.7000,\"publicationDate\":\"2024-12-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical image analysis\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1016/j.media.2024.103435\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1016/j.media.2024.103435","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning methods have been widely used for various glioma prediction tasks. However, they are usually task-specific, segmentation-dependent, and lack interpretable biomarkers. Accurately predicting glioma histological grade and molecular subtypes at the same time, while providing reliable imaging biomarkers, remains challenging. To achieve this, we propose a novel cooperative multi-task learning network (CMTLNet) which consists of a task-common feature extraction (CFE) module, a task-specific unique feature extraction (UFE) module, and a unique-common feature collaborative classification (UCFC) module. In CFE, a segmentation-free tumor feature perception (SFTFP) module is first designed to extract tumor-aware features in a classification manner rather than a segmentation manner. Then, based on the multi-scale tumor-aware features extracted by the SFTFP module, CFE uses convolutional layers to further refine these features, from which the task-common features are learned. In UFE, task-specific unique features are extracted based on orthogonal projection and conditional classification strategies. In UCFC, the unique and common features are fused with an attention mechanism to make them adaptive to different glioma prediction tasks. Finally, deep-feature-guided interpretable radiomic biomarkers for each glioma prediction task are explored by combining SHAP values and correlation analysis. Through comparisons with recently reported methods on a large multi-center dataset comprising over 1800 cases, we demonstrated the superiority of the proposed CMTLNet: the mean Matthews correlation coefficient on the validation and test sets improved by (4.1%, 10.7%), (3.6%, 23.4%), and (2.7%, 22.7%) for the glioma grading, 1p/19q, and IDH status prediction tasks, respectively. In addition, we found that some radiomic features are highly correlated with otherwise uninterpretable deep features and that their variation trends are consistent across multi-center datasets, so they can be taken as reliable imaging biomarkers for glioma diagnosis. The proposed CMTLNet provides an interpretable tool for glioma multi-task prediction, which is beneficial for precise diagnosis and personalized treatment of glioma.
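
The abstract only names the mechanisms inside UFE and UCFC (orthogonal projection, conditional classification, attention fusion) without implementation detail. As a rough illustration of how an orthogonal projection can strip the task-common component out of a task-specific feature before attention-based fusion, a minimal PyTorch sketch follows; every module name, shape, and the two-way softmax attention here are assumptions made for illustration, not the authors' implementation (the conditional classification strategy is omitted).

```python
# Hypothetical sketch only: the paper's CMTLNet code is not reproduced here.
# (1) Remove the component of a task-specific feature that lies along the
#     task-common feature via orthogonal projection.
# (2) Fuse the common and orthogonalized unique features with a learned
#     attention weighting before a per-task classification head.
import torch
import torch.nn as nn


def orthogonal_residual(unique_feat: torch.Tensor, common_feat: torch.Tensor) -> torch.Tensor:
    """Return unique_feat minus its projection onto common_feat (shapes: (batch, dim))."""
    scale = (unique_feat * common_feat).sum(dim=1, keepdim=True) / (
        (common_feat * common_feat).sum(dim=1, keepdim=True) + 1e-8
    )
    return unique_feat - scale * common_feat


class AttentionFusionHead(nn.Module):
    """Fuse common and orthogonalized unique features, then classify one task."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, common_feat: torch.Tensor, unique_feat: torch.Tensor) -> torch.Tensor:
        unique_feat = orthogonal_residual(unique_feat, common_feat)
        weights = self.attn(torch.cat([common_feat, unique_feat], dim=1))  # (batch, 2)
        fused = weights[:, :1] * common_feat + weights[:, 1:] * unique_feat
        return self.classifier(fused)


if __name__ == "__main__":
    head = AttentionFusionHead(dim=128, num_classes=2)  # e.g. a binary IDH-status head
    common = torch.randn(4, 128)                        # stand-in for task-common features
    unique = torch.randn(4, 128)                        # stand-in for task-specific features
    print(head(common, unique).shape)                   # torch.Size([4, 2])
```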
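The biomarker analysis is likewise only outlined in the abstract (SHAP values combined with correlation analysis, evaluated with the Matthews correlation coefficient). The toy sketch below, on synthetic data, shows what such a post-hoc analysis could look like; the SHAP importances are assumed to be precomputed elsewhere, and the correlation threshold, data, and error rate are purely illustrative, not the paper's pipeline.

```python
# Toy sketch on synthetic data: rank radiomic features by precomputed mean |SHAP|
# importance, test their Spearman correlation with a deep feature, and report the
# Matthews correlation coefficient used as the evaluation metric in the abstract.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
n_cases, n_radiomic = 200, 10

radiomic = rng.normal(size=(n_cases, n_radiomic))                           # stand-in radiomic matrix
deep_feature = 0.8 * radiomic[:, 3] + rng.normal(scale=0.5, size=n_cases)   # one correlated deep feature
shap_importance = rng.random(n_radiomic)                                    # assumed mean |SHAP| values

# Keep the radiomic features the classifier relied on most, then check how strongly
# each one tracks the deep feature.
for i in np.argsort(shap_importance)[::-1][:3]:
    rho, p = spearmanr(radiomic[:, i], deep_feature)
    if abs(rho) > 0.5 and p < 0.05:
        print(f"radiomic feature {i}: rho={rho:.2f}, p={p:.1e} -> candidate biomarker")

# Matthews correlation coefficient on simulated binary predictions (15% label flips).
y_true = rng.integers(0, 2, size=n_cases)
y_pred = np.where(rng.random(n_cases) < 0.15, 1 - y_true, y_true)
print("MCC:", matthews_corrcoef(y_true, y_pred))
```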

Source journal: Medical Image Analysis (CAS category: Engineering & Technology, Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles published: 309
Average review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.