Achieving multi-modal brain disease diagnosis performance using only single-modal images through generative AI

Kaicong Sun, Yuanwang Zhang, Jiameng Liu, Ling Yu, Yan Zhou, Fang Xie, Qihao Guo, Han Zhang, Qian Wang, Dinggang Shen

Communications Engineering, pp. 1–13, published 10 July 2024. DOI: 10.1038/s44172-024-00245-w. Open-access PDF: https://www.nature.com/articles/s44172-024-00245-w.pdf
Brain disease diagnosis using multiple imaging modalities has shown superior performance compared to using a single modality, yet multi-modal data are often unavailable in clinical routine due to cost or radiation risk. Here we propose a synthesis-empowered, uncertainty-aware classification framework for brain disease diagnosis. To synthesize disease-relevant features effectively, a two-stage framework is proposed, comprising multi-modal feature representation learning and representation transfer based on hierarchical similarity matching. In addition, the synthesized and acquired modality features are integrated through evidential learning, which yields both the diagnostic decision and its uncertainty. Our framework is extensively evaluated on five datasets containing 3758 subjects across three brain diseases: Alzheimer’s disease (AD), subcortical vascular mild cognitive impairment (MCI), and O6-methylguanine-DNA methyltransferase promoter methylation status for glioblastoma. On the ADNI dataset, it achieves areas under the ROC curve of 0.950 and 0.806 for discriminating AD patients from normal controls and progressive MCI from static MCI, respectively. Our framework not only achieves quasi-multi-modal performance despite using only single-modal input, but also provides reliable estimates of diagnostic uncertainty.

Kaicong Sun and colleagues design a generative, uncertainty-aware AI framework to facilitate brain disease diagnosis using single-modal input. Validated on five datasets comprising thousands of subjects, the approach shows promising results, close to those obtained with multi-modal input, across three types of brain disease.
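The evidential-fusion step described in the abstract lends itself to a short illustration. Below is a minimal, assumption-laden PyTorch sketch of one common realization of evidential learning (Dirichlet-based subjective logic with Dempster-Shafer combination, as used in evidential deep learning and trusted multi-view classification): each branch (acquired-modality features vs. synthesized-modality features) outputs non-negative evidence, which is converted to belief masses plus an uncertainty mass, and the two opinions are fused. The module names, feature dimensions, and the specific fusion rule are illustrative assumptions, not the authors' published implementation.

# Illustrative sketch only: an evidential (Dirichlet-based) classification head per
# modality branch and a Dempster-Shafer style fusion of the acquired- and
# synthesized-modality opinions. The abstract does not specify the authors'
# exact architecture, loss, or fusion rule; every name and shape below is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps a feature vector to non-negative class evidence (Dirichlet alpha = evidence + 1)."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return F.softplus(self.fc(feat))  # evidence is kept non-negative


def dirichlet_opinion(evidence: torch.Tensor):
    """Turn per-class evidence into subjective-logic belief masses and an uncertainty mass."""
    num_classes = evidence.shape[-1]
    strength = evidence.sum(-1, keepdim=True) + num_classes  # S = sum_k (e_k + 1)
    belief = evidence / strength                             # b_k = e_k / S
    uncertainty = num_classes / strength                     # u = K / S, so sum(b) + u = 1
    return belief, uncertainty


def ds_fuse(b1, u1, b2, u2):
    """Dempster-Shafer combination of two opinions (b: [batch, K], u: [batch, 1])."""
    conflict = b1.sum(-1, keepdim=True) * b2.sum(-1, keepdim=True) - (b1 * b2).sum(-1, keepdim=True)
    norm = 1.0 - conflict.clamp(max=0.999)  # guard against division by ~0 under near-total conflict
    belief = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    uncertainty = (u1 * u2) / norm
    return belief, uncertainty


if __name__ == "__main__":
    batch, feat_dim, num_classes = 4, 128, 2          # e.g. AD vs. normal control
    acquired_feat = torch.randn(batch, feat_dim)      # features of the acquired modality
    synthesized_feat = torch.randn(batch, feat_dim)   # features of the synthesized modality

    head_acq = EvidentialHead(feat_dim, num_classes)
    head_syn = EvidentialHead(feat_dim, num_classes)

    b_acq, u_acq = dirichlet_opinion(head_acq(acquired_feat))
    b_syn, u_syn = dirichlet_opinion(head_syn(synthesized_feat))
    b_fused, u_fused = ds_fuse(b_acq, u_acq, b_syn, u_syn)

    prob = b_fused + u_fused / num_classes            # expected class probabilities, p_k = b_k + u/K
    print(prob)                                       # diagnostic decision per subject
    print(u_fused)                                    # per-subject diagnostic uncertainty in [0, 1]

One appeal of this kind of fusion is that a branch producing little evidence (high uncertainty mass) contributes weakly to the fused belief, so the framework can report a per-subject uncertainty alongside the diagnostic decision rather than only an aggregate accuracy.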