Pub Date: 2020-10-01. DOI: 10.1007/978-3-030-59354-4_13
Nicolas Honnorat, Adolf Pfefferbaum, Edith V Sullivan, Kilian M Pohl
Functional connectivity between brain regions is often estimated by correlating brain activity measured by resting-state fMRI in those regions. The impact of factors (e.g., disorder or substance use) is then modeled by their effects on these correlation matrices in individuals. A crucial step in better understanding their effects on brain function could lie in estimating connectomes, which encode the correlation matrices across subjects. Connectomes are mostly estimated by creating a single average for a specific cohort, which works well for binary factors (such as sex) but is unsuited for continuous ones, such as alcohol consumption. Alternative approaches based on regression methods usually model each pair of regions separately, which generally produces incoherent connectomes as correlations across multiple regions contradict each other. In this work, we address these issues by introducing a deep learning model that predicts connectomes based on factor values. The predictions are defined on a simplex spanned across correlation matrices, whose convex combination guarantees that the deep learning model generates well-formed connectomes. We present an efficient method for creating these simplexes and improve the accuracy of the entire analysis by defining loss functions based on robust norms. We show that our deep learning approach produces accurate models on challenging synthetic data. Furthermore, we apply the approach to the resting-state fMRI scans of 281 subjects to study the effects of sex, alcohol, and HIV on brain function.
{"title":"Deep Parametric Mixtures for Modeling the Functional Connectome.","authors":"Nicolas Honnorat, Adolf Pfefferbaum, Edith V Sullivan, Kilian M Pohl","doi":"10.1007/978-3-030-59354-4_13","DOIUrl":"10.1007/978-3-030-59354-4_13","url":null,"abstract":"<p><p>Functional connectivity between brain regions is often estimated by correlating brain activity measured by resting-state fMRI in those regions. The impact of factors (e.g, disorder or substance use) are then modeled by their effects on these correlation matrices in individuals. A crucial step in better understanding their effects on brain function could lie in estimating connectomes, which encode the correlation matrices across subjects. Connectomes are mostly estimated by creating a single average for a specific cohort, which works well for binary factors (such as sex) but is unsuited for continuous ones, such as alcohol consumption. Alternative approaches based on regression methods usually model each pair of regions separately, which generally produces incoherent connectomes as correlations across multiple regions contradict each other. In this work, we address these issues by introducing a deep learning model that predicts connectomes based on factor values. The predictions are defined on a simplex spanned across correlation matrices, whose convex combination guarantees that the deep learning model generates well-formed connectomes. We present an efficient method for creating these simplexes and improve the accuracy of the entire analysis by defining loss functions based on robust norms. We show that our deep learning approach is able to produce accurate models on challenging synthetic data. Furthermore, we apply the approach to the resting-state fMRI scans of 281 subjects to study the effect of sex, alcohol, and HIV on brain function.</p>","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"12329 ","pages":"133-143"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7643933/pdf/nihms-1636596.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38583059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-10-01. DOI: 10.1007/978-3-030-59354-4_9
Rafi Ayub, Qingyu Zhao, M J Meloy, Edith V Sullivan, Adolf Pfefferbaum, Ehsan Adeli, Kilian M Pohl
Minor artifacts introduced during image acquisition are often negligible to the human eye, such as a confined field of view resulting in MRI missing the top of the head. This cropping artifact, however, can cause suboptimal processing of the MRI, resulting in data omission or decreasing the power of subsequent analyses. We propose to avoid data or quality loss by restoring these missing regions of the head via variational autoencoders (VAE), a deep generative model that has previously been applied to high-resolution image reconstruction. Based on diffusion weighted images (DWI) acquired by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), we evaluate the accuracy of inpainting the top of the head by common autoencoder models (U-Net, VQVAE, and VAE-GAN) and a custom model proposed herein called U-VQVAE. Our results show that U-VQVAE not only achieved the highest accuracy but also resulted in MRI processing producing lower fractional anisotropy (FA) in the supplementary motor area than FA derived from the original MRIs. Lower FA implies that inpainting reduces noise in processing DWI and thus increases the quality of the generated results. The code is available at https://github.com/RdoubleA/DWIinpainting.
{"title":"Inpainting Cropped Diffusion MRI using Deep Generative Models.","authors":"Rafi Ayub, Qingyu Zhao, M J Meloy, Edith V Sullivan, Adolf Pfefferbaum, Ehsan Adeli, Kilian M Pohl","doi":"10.1007/978-3-030-59354-4_9","DOIUrl":"https://doi.org/10.1007/978-3-030-59354-4_9","url":null,"abstract":"<p><p>Minor artifacts introduced during image acquisition are often negligible to the human eye, such as a confined field of view resulting in MRI missing the top of the head. This cropping artifact, however, can cause suboptimal processing of the MRI resulting in data omission or decreasing the power of subsequent analyses. We propose to avoid data or quality loss by restoring these missing regions of the head via variational autoencoders (VAE), a deep generative model that has been previously applied to high resolution image reconstruction. Based on diffusion weighted images (DWI) acquired by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), we evaluate the accuracy of inpainting the top of the head by common autoencoder models (U-Net, VQVAE, and VAE-GAN) and a custom model proposed herein called U-VQVAE. Our results show that U-VQVAE not only achieved the highest accuracy, but also resulted in MRI processing producing lower fractional anisotropy (FA) in the supplementary motor area than FA derived from the original MRIs. Lower FA implies that inpainting reduces noise in processing DWI and thus increase the quality of the generated results. The code is available at https://github.com/RdoubleA/DWIinpainting.</p>","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"12329 ","pages":"91-100"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8123091/pdf/nihms-1698575.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39001076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-01. DOI: 10.1007/978-3-030-59354-4
I. Rekik, E. Adeli, Sang Hyun Park, M. Hernández
{"title":"Predictive Intelligence in Medicine: Third International Workshop, PRIME 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, Proceedings","authors":"I. Rekik, E. Adeli, Sang Hyun Park, M. Hernández","doi":"10.1007/978-3-030-59354-4","DOIUrl":"https://doi.org/10.1007/978-3-030-59354-4","url":null,"abstract":"","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"83 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89672926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-13. DOI: 10.1007/978-3-030-32281-6_19
Antoine Rivail, U. Schmidt-Erfurth, Wolf-Dieter Vogl, S. Waldstein, Sophie Riedl, C. Grechenig, Zhichao Wu, H. Bogunović
{"title":"Correction to: Modeling Disease Progression in Retinal OCTs with Longitudinal Self-supervised Learning","authors":"Antoine Rivail, U. Schmidt-Erfurth, Wolf-Dieter Vogl, S. Waldstein, Sophie Riedl, C. Grechenig, Zhichao Wu, H. Bogunović","doi":"10.1007/978-3-030-32281-6_19","DOIUrl":"https://doi.org/10.1007/978-3-030-32281-6_19","url":null,"abstract":"","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81119118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01. Epub Date: 2019-10-10. DOI: 10.1007/978-3-030-32281-6_1
Răzvan V Marinescu, Neil P Oxtoby, Alexandra L Young, Esther E Bron, Arthur W Toga, Michael W Weiner, Frederik Barkhof, Nick C Fox, Polina Golland, Stefan Klein, Daniel C Alexander
The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting the future evolution of individuals at risk of Alzheimer's disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. Participants are then required to make forecasts of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog 13), and total volume of the ventricles, which are then compared with future measurements. Strong points of the challenge are that the test data did not exist at the time of forecasting (it was acquired afterwards) and that it focuses on the challenging problem of cohort selection for clinical trials by identifying fast progressors. The submission phase of TADPOLE was open until 15 November 2017; since then, data have been acquired through April 2019 from 219 subjects with 223 clinical visits and 150 Magnetic Resonance Imaging (MRI) scans, which were used for the evaluation of the participants' predictions. Thirty-three teams participated with a total of 92 submissions. No single submission was best at predicting all three outcomes. For diagnosis prediction, the best forecast (team Frog), which was based on gradient boosting, obtained a multiclass area under the receiver operating characteristic curve (MAUC) of 0.931, while for ventricle prediction the best forecast (team EMC1), which was based on disease progression modelling and spline regression, obtained a mean absolute error of 0.41% of total intracranial volume (ICV). For ADAS-Cog 13, no forecast was considerably better than the benchmark mixed effects model (BenchmarkME), provided to participants before the submission deadline. Further analysis can help understand which input features and algorithms are most suitable for Alzheimer's disease prediction and for aiding patient stratification in clinical trials. The submission system remains open via the website: https://tadpole.grand-challenge.org/.
{"title":"TADPOLE Challenge: Accurate Alzheimer's disease prediction through crowdsourced forecasting of future data.","authors":"Răzvan V Marinescu, Neil P Oxtoby, Alexandra L Young, Esther E Bron, Arthur W Toga, Michael W Weiner, Frederik Barkhof, Nick C Fox, Polina Golland, Stefan Klein, Daniel C Alexander","doi":"10.1007/978-3-030-32281-6_1","DOIUrl":"https://doi.org/10.1007/978-3-030-32281-6_1","url":null,"abstract":"<p><p>The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge compares the performance of algorithms at predicting the future evolution of individuals at risk of Alzheimer's disease. TADPOLE Challenge participants train their models and algorithms on historical data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. Participants are then required to make forecasts of three key outcomes for ADNI-3 rollover participants: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog 13), and total volume of the ventricles - which are then compared with future measurements. Strong points of the challenge are that the test data did not exist at the time of forecasting (it was acquired afterwards), and that it focuses on the challenging problem of cohort selection for clinical trials by identifying fast progressors. The submission phase of TADPOLE was open until 15 November 2017; since then data has been acquired until April 2019 from 219 subjects with 223 clinical visits and 150 Magnetic Resonance Imaging (MRI) scans, which was used for the evaluation of the participants' predictions. Thirty-three teams participated with a total of 92 submissions. No single submission was best at predicting all three outcomes. For diagnosis prediction, the best forecast (team Frog), which was based on gradient boosting, obtained a multiclass area under the receiver-operating curve (MAUC) of 0.931, while for ventricle prediction the best forecast (team <i>EMC1</i> ), which was based on disease progression modelling and spline regression, obtained mean absolute error of 0.41% of total intracranial volume (ICV). For ADAS-Cog 13, no forecast was considerably better than the benchmark mixed effects model (<i>BenchmarkME</i> ), provided to participants before the submission deadline. Further analysis can help understand which input features and algorithms are most suitable for Alzheimer's disease prediction and for aiding patient stratification in clinical trials. The submission system remains open via the website: https://tadpole.grand-challenge.org/.</p>","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"11843 ","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7315046/pdf/nihms-1586281.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38087038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-01. DOI: 10.1007/978-3-030-32281-6
I. Rekik, E. Adeli, Sang Hyun Park
{"title":"Predictive Intelligence in Medicine: Second International Workshop, PRIME 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Proceedings","authors":"I. Rekik, E. Adeli, Sang Hyun Park","doi":"10.1007/978-3-030-32281-6","DOIUrl":"https://doi.org/10.1007/978-3-030-32281-6","url":null,"abstract":"","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"454 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76064952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01. DOI: 10.1007/978-3-030-00320-3_10
Tao Zhou, Kim-Han Thung, Mingxia Liu, Feng Shi, Changqing Zhang, Dinggang Shen
Recent studies have shown that fusing multi-modal neuroimaging data can improve the performance of Alzheimer's Disease (AD) diagnosis. However, most existing methods simply concatenate features from each modality without appropriate consideration of the correlations among multi-modalities. In addition, existing methods often employ feature selection (or fusion) and classifier training in two independent steps without considering that the two pipelined steps are highly related to each other. Furthermore, existing methods that make predictions based on a single classifier may not be able to address the heterogeneity of AD progression. To address these issues, we propose a novel AD diagnosis framework based on latent space learning with ensemble classifiers, integrating the latent representation learning and the ensemble of multiple diversified classifiers into a unified framework. To this end, we first project the neuroimaging data from different modalities into a common latent space and impose a joint sparsity constraint on the concatenated projection matrices. Then, we map the learned latent representations into the label space to learn multiple diversified classifiers and aggregate their predictions to obtain the final classification result. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that our method outperforms other state-of-the-art methods.
Title: Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer's Disease Diagnosis. PRedictive Intelligence in MEdicine. PRIME (Workshop), vol. 11121, pp. 76-84.
Pub Date: 2018-09-01. Epub Date: 2018-09-13. DOI: 10.1007/978-3-030-00320-3_2
Juntang Zhuang, Nicha C Dvornek, Xiaoxiao Li, Pamela Ventola, James S Duncan
Autism spectrum disorder (ASD) is a complex neurodevelopmental syndrome. Early diagnosis and precise treatment are essential for ASD patients. Although researchers have built many analytical models, there has been limited progress in accurate predictive models for early diagnosis. In this project, we aim to build an accurate model to predict treatment outcome and ASD severity from early-stage functional magnetic resonance imaging (fMRI) scans. The difficulty of building large databases of patients who have received specific treatments and the high dimensionality of medical image analysis problems are challenges in this work. We propose a generic and accurate two-level approach for high-dimensional regression problems in medical image analysis. First, we perform region-level feature selection using a predefined brain parcellation. Based on the assumption that voxels within one region in the brain have similar values, for each region we use the bootstrapped mean of its voxels as a feature. In this way, the dimension of the data is reduced from the number of voxels to the number of regions. Then we detect predictive regions by various feature selection methods. Second, we extract voxels within the selected regions and perform voxel-level feature selection. To use this model in both linear and non-linear cases with limited training examples, we apply two-level elastic net regression and random forest (RF) models, respectively. To validate the accuracy and robustness of this approach, we perform experiments on both task-fMRI and resting-state fMRI datasets. Furthermore, we visualize the influence of each region and show that the results match well with other findings.
{"title":"Prediction of severity and treatment outcome for ASD from fMRI.","authors":"Juntang Zhuang, Nicha C Dvornek, Xiaoxiao Li, Pamela Ventola, James S Duncan","doi":"10.1007/978-3-030-00320-3_2","DOIUrl":"10.1007/978-3-030-00320-3_2","url":null,"abstract":"<p><p>Autism spectrum disorder (ASD) is a complex neurodevelop-mental syndrome. Early diagnosis and precise treatment are essential for ASD patients. Although researchers have built many analytical models, there has been limited progress in accurate predictive models for early diagnosis. In this project, we aim to build an accurate model to predict treatment outcome and ASD severity from early stage functional magnetic resonance imaging (fMRI) scans. The difficulty in building large databases of patients who have received specific treatments and the high dimensionality of medical image analysis problems are challenges in this work. We propose a generic and accurate two-level approach for high-dimensional regression problems in medical image analysis. First, we perform region-level feature selection using a predefined brain parcellation. Based on the assumption that voxels within one region in the brain have similar values, for each region we use the bootstrapped mean of voxels within it as a feature. In this way, the dimension of data is reduced from number of voxels to number of regions. Then we detect predictive regions by various feature selection methods. Second, we extract voxels within selected regions, and perform voxel-level feature selection. To use this model in both linear and non-linear cases with limited training examples, we apply two-level elastic net regression and random forest (RF) models respectively. To validate accuracy and robustness of this approach, we perform experiments on both task-fMRI and resting state fMRI datasets. Furthermore, we visualize the influence of each region, and show that the results match well with other findings.</p>","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"11121 ","pages":"9-17"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-030-00320-3_2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38427728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-01-01. Epub Date: 2018-09-13. DOI: 10.1007/978-3-030-00320-3_7
Kim-Han Thung, Pew-Thian Yap, Dinggang Shen
It is vital to identify Mild Cognitive Impairment (MCI) subjects who will progress to Alzheimer's Disease (AD), so that early treatment can be administered. Recent studies show that using complementary information from multi-modality data may improve the model performance of the above prediction problem. However, multi-modality data is often incomplete, rendering prediction models that rely on complete data unusable. One way to deal with this issue is to first impute the missing values and then build a classifier based on the completed data. This two-step approach, however, may generate a non-optimal classifier, as the errors of the imputation may propagate to the classifier during training. To address this issue, we propose a unified framework that jointly performs feature selection, data denoising, missing value imputation, and classifier learning. To this end, we use a low-rank constraint to impute the missing values and denoise the data simultaneously, while using a regression model for feature selection and classification. The feature weights learned by the regression model are integrated into the low-rank formulation to focus on discriminative features when denoising and imputing data, while the resulting low-rank matrix is used for classifier learning. These two components interact and correct each other iteratively using the Alternating Direction Method of Multipliers (ADMM). Experimental results on the incomplete multi-modality ADNI dataset show that our proposed method outperforms other comparison methods.
{"title":"Joint Robust Imputation and Classification for Early Dementia Detection Using Incomplete Multi-modality Data.","authors":"Kim-Han Thung, Pew-Thian Yap, Dinggang Shen","doi":"10.1007/978-3-030-00320-3_7","DOIUrl":"10.1007/978-3-030-00320-3_7","url":null,"abstract":"<p><p>It is vital to identify Mild Cognitive Impairment (MCI) subjects who will progress to Alzheimer's Disease (AD), so that early treatment can be administered. Recent studies show that using complementary information from multi-modality data may improve the model performance of the above prediction problem. However, multi-modality data is often incomplete, causing the prediction models that rely on complete data unusable. One way to deal with this issue is by first imputing the missing values, and then building a classifier based on the completed data. This two-step approach, however, may generate non-optimal classifier output, as the errors of the imputation may propagate to the classifier during training. To address this issue, we propose a unified framework that jointly performs feature selection, data denoising, missing values imputation, and classifier learning. To this end, we use a low-rank constraint to impute the missing values and denoise the data simultaneously, while using a regression model for feature selection and classification. The feature weights learned by the regression model are integrated into the low rank formulation to focus on discriminative features when denoising and imputing data, while the resulting low-rank matrix is used for classifier learning. These two components interact and correct each other iteratively using Alternating Direction Method of Multiplier (ADMM). The experimental results using incomplete multi-modality ADNI dataset shows that our proposed method outperforms other comparison methods.</p>","PeriodicalId":92572,"journal":{"name":"PRedictive Intelligence in MEdicine. PRIME (Workshop)","volume":"11121 ","pages":"51-59"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8386184/pdf/nihms-1710613.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39357217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}