Achieving multi-modal brain disease diagnosis performance using only single-modal images through generative AI

Kaicong Sun, Yuanwang Zhang, Jiameng Liu, Ling Yu, Yan Zhou, Fang Xie, Qihao Guo, Han Zhang, Qian Wang, Dinggang Shen
DOI: 10.1038/s44172-024-00245-w
Journal: Communications Engineering
Published: 2024-07-10 (Journal Article)
Full text: https://www.nature.com/articles/s44172-024-00245-w
Open-access PDF: https://www.nature.com/articles/s44172-024-00245-w.pdf
Citations: 0

Abstract

Brain disease diagnosis using multiple imaging modalities has shown superior performance compared to using a single modality, yet multi-modal data are not easily available in clinical routine due to cost or radiation risk. Here we propose a synthesis-empowered, uncertainty-aware classification framework for brain disease diagnosis. To synthesize disease-relevant features effectively, a two-stage framework is proposed, comprising multi-modal feature representation learning and representation transfer based on hierarchical similarity matching. In addition, the synthesized and acquired modality features are integrated based on evidential learning, which provides both a diagnosis decision and a diagnosis uncertainty. The framework is extensively evaluated on five datasets containing 3758 subjects across three brain diseases: Alzheimer's disease (AD), subcortical vascular mild cognitive impairment (MCI), and O[6]-methylguanine-DNA methyltransferase promoter methylation status for glioblastoma. On the ADNI dataset, it achieves areas under the ROC curve of 0.950 and 0.806 for discriminating AD patients from normal controls and progressive MCI from static MCI, respectively. The framework not only achieves quasi-multi-modal performance despite using single-modal input, but also provides reliable diagnosis uncertainty. Kaicong Sun and colleagues design a generative, uncertainty-aware AI framework to facilitate brain disease diagnosis using single-modal input. Validated on five datasets comprising thousands of subjects, the approach shows promising results close to those obtained with multi-modal input across three types of brain disease.
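The abstract states that the synthesized and acquired modality features are integrated via evidential learning, yielding both a decision and an uncertainty. As a minimal sketch of how evidential classification typically produces these two outputs, the snippet below uses the generic Dirichlet / subjective-logic formulation; it is an illustration of the general technique, not the authors' implementation, and the `evidence` vector and class count are assumptions.

```python
import numpy as np

def evidential_decision(evidence):
    """Dirichlet-based evidential classification (subjective logic).

    `evidence` is a non-negative vector with one entry per class, as would
    be produced by a network head with a non-negative activation (e.g.
    softplus). Returns the expected class probabilities and a scalar
    uncertainty in (0, 1]: the uncertainty approaches 1 when no evidence
    is collected and shrinks as evidence accumulates.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size            # number of classes
    alpha = evidence + 1.0       # Dirichlet concentration parameters
    S = alpha.sum()              # Dirichlet strength
    prob = alpha / S             # expected class probabilities
    uncertainty = K / S          # "vacuity": mass not committed to any class
    return prob, uncertainty

# Strong evidence for class 0 -> confident decision, low uncertainty.
p, u = evidential_decision([18.0, 2.0])
# No evidence at all -> uniform probabilities, maximal uncertainty (u = 1).
p0, u0 = evidential_decision([0.0, 0.0])
```

In a multi-branch setup such as the one described here, each branch (synthesized vs. acquired modality) would emit its own evidence vector, and the per-branch uncertainties can then weight how the branches are fused.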
