Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique that allows simultaneous measurement of multiple important tissue properties in the human body, e.g., T1 and T2 relaxation times. While MRF has demonstrated better scan efficiency than conventional quantitative imaging techniques, further acceleration is desired, especially for subjects such as infants and young children. However, the conventional MRF framework uses only a simple template matching algorithm to quantify tissue properties, without considering the underlying spatial association among pixels in MRF signals. In this work, we aim to accelerate MRF acquisition by developing a new post-processing method that allows accurate quantification of tissue properties from less sampled data. Moreover, to improve quantification accuracy, the MRF signals from multiple surrounding pixels are used together to better estimate the tissue properties at the central target pixel, whereas the original template matching method uses only the signal from the target pixel itself. In particular, a deep learning model, i.e., U-Net, is used to learn the mapping from the MRF signal evolutions to the tissue property map. To further reduce the network size of the U-Net, principal component analysis (PCA) is used to reduce the dimensionality of the input signals. Based on in vivo brain data, our method achieves accurate quantification of both T1 and T2 using only 25% of the time points, i.e., a four-fold acceleration in data acquisition compared to the original template matching method.
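As a rough illustration of the PCA step described in this abstract, the sketch below compresses simulated MRF signal evolutions with numpy before they would be fed to a network; the image size, number of time points, and component count are arbitrary assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MRF data: H x W image, T time points per pixel (sizes assumed).
H, W, T = 16, 16, 250
signals = rng.standard_normal((H * W, T))

# PCA via SVD on mean-centered signals: keep k principal components,
# shrinking the network's input channels from T to k.
k = 10
mean = signals.mean(axis=0, keepdims=True)
centered = signals - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:k]                    # k x T projection basis
compressed = centered @ components.T   # (H*W) x k reduced signals

# Reshape to an image-like tensor usable as a k-channel network input.
net_input = compressed.reshape(H, W, k)
print(net_input.shape)  # (16, 16, 10)
```

Spatial context then comes for free: a convolutional model such as U-Net sees neighboring pixels' compressed signals through its receptive field, which is the spatial constraint the abstract contrasts with per-pixel template matching.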
Zhenghan Fang, Yong Chen, Mingxia Liu, Yiqiang Zhan, Weili Lin, Dinggang Shen. "Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Undersampled Data in Magnetic Resonance Fingerprinting (MRF)." Machine Learning in Medical Imaging (MLMI Workshop), vol. 11046, pp. 398-405, 2018. DOI: 10.1007/978-3-030-00919-9_46
Pub Date: 2018-01-01. DOI: 10.1007/978-3-030-00919-9
Yinghuan Shi, Heung-Il Suk, Mingxia Liu
"Machine Learning in Medical Imaging: 9th International Workshop, MLMI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings." 2018. DOI: 10.1007/978-3-030-00919-9
Pub Date: 2017-09-01 (Epub 2017-09-07). DOI: 10.1007/978-3-319-67389-9_43
Dmitry Petrov, Boris A Gutman, Shih-Hua Julie Yu, Theo G M van Erp, Jessica A Turner, Lianne Schmaal, Dick Veltman, Lei Wang, Kathryn Alpert, Dmitry Isaev, Artemis Zavaliangos-Petropulu, Christopher R K Ching, Vince Calhoun, David Glahn, Theodore D Satterthwaite, Ole Andreas Andreasen, Stefan Borgwardt, Fleur Howells, Nynke Groenewold, Aristotle Voineskos, Joaquim Radua, Steven G Potkin, Benedicto Crespo-Facorro, Diana Tordesillas-Gutiérrez, Li Shen, Irina Lebedeva, Gianfranco Spalletta, Gary Donohoe, Peter Kochunov, Pedro G P Rosa, Anthony James, Udo Dannlowski, Bernhard T Baune, André Aleman, Ian H Gotlib, Henrik Walter, Martin Walter, Jair C Soares, Stefan Ehrlich, Ruben C Gur, N Trung Doan, Ingrid Agartz, Lars T Westlye, Fabienne Harrisberger, Anita Riecher-Rössler, Anne Uhlmann, Dan J Stein, Erin W Dickie, Edith Pomarol-Clotet, Paola Fuentes-Claramonte, Erick Jorge Canales-Rodríguez, Raymond Salvador, Alexander J Huang, Roberto Roiz-Santiañez, Shan Cong, Alexander Tomyshev, Fabrizio Piras, Daniela Vecchio, Nerisa Banaj, Valentina Ciullo, Elliot Hong, Geraldo Busatto, Marcus V Zanetti, Mauricio H Serpa, Simon Cervenka, Sinead Kelly, Dominik Grotegerd, Matthew D Sacchet, Ilya M Veer, Meng Li, Mon-Ju Wu, Benson Irungu, Esther Walton, Paul M Thompson
As very large studies of complex neuroimaging phenotypes become more common, human quality assessment of MRI-derived data remains one of the last major bottlenecks. Few attempts have so far been made to address this issue with machine learning. In this work, we optimize predictive models of quality for meshes representing deep brain structure shapes. We use standard vertex-wise and global shape features computed homologously across 19 cohorts and over 7500 human-rated subjects, training kernelized Support Vector Machine and Gradient Boosted Decision Trees classifiers to detect meshes of failing quality. Our models generalize across datasets and diseases, reducing human workload by 30-70%, or equivalently hundreds of human rater hours for datasets of comparable size, with recall rates approaching inter-rater reliability.
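The workload/recall trade-off the abstract reports can be illustrated with a toy thresholding sketch on synthetic classifier scores; the score distribution, failure rate, and threshold here are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic QC scenario: 1 = failing mesh, 0 = passing mesh.
labels = (rng.random(1000) < 0.2).astype(int)
# Hypothetical classifier scores: failing meshes score higher on average.
scores = rng.normal(loc=labels * 2.0, scale=1.0)

def review_tradeoff(scores, labels, threshold):
    """Send meshes scoring above the threshold to a human rater and
    auto-accept the rest; return (recall on fails, workload reduction)."""
    flagged = scores >= threshold
    recall = flagged[labels == 1].mean()
    workload_reduction = 1.0 - flagged.mean()
    return recall, workload_reduction

recall, saved = review_tradeoff(scores, labels, threshold=0.0)
```

Sweeping the threshold traces the same curve the paper reports: a lower threshold pushes recall toward inter-rater reliability at the cost of a smaller workload reduction.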
Dmitry Petrov, Boris A Gutman, Shih-Hua Julie Yu, et al. "Machine Learning for Large-Scale Quality Control of 3D Shape Models in Neuroimaging." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 371-378, 2017. DOI: 10.1007/978-3-319-67389-9_43
Sparse representation-based brain network modeling, although popular, often results in relatively large inter-subject variability in network structures. This inevitably makes inter-subject comparison difficult, eventually deteriorating the generalization capability of personalized disease diagnosis. Accordingly, group sparse representation has been proposed to alleviate this limitation by jointly estimating connectivity weights for all subjects. However, the brain networks constructed with this method often fail to provide satisfactory separability between subjects from different groups (e.g., patients vs. normal controls), which also affects the performance of computer-aided disease diagnosis. Based on the hypothesis that subjects from the same group should have more similar functional connectivity (FC) patterns than subjects from different groups, we propose an "inter-subject FC similarity-guided" group sparse network modeling method. In this method, we explicitly include the inter-subject FC similarity as a constraint in group-wise FC network modeling, while retaining sufficient between-group differences in the resultant FC networks. This improves the separability of brain functional networks between different groups, thus facilitating better personalized brain disease diagnosis. Specifically, the inter-subject FC similarity is roughly estimated, for each pair of subjects, by comparing the Pearson's correlation-based FC patterns of each brain region to the other regions. It is then implemented as an additional weighting term to ensure adequate inter-subject FC differences between subjects from different groups. Of note, our method retains the group sparsity constraint to ensure the overall consistency of the resultant individual brain networks. Experimental results show that our method achieves a balanced trade-off, not only generating individually consistent FC networks but also effectively maintaining the necessary group difference, thereby significantly improving connectomics-based diagnosis of mild cognitive impairment (MCI).
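One simplified reading of the inter-subject FC similarity estimate, comparing subjects' Pearson FC matrices region by region, can be sketched as follows; the array sizes and the averaging over regions are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic rs-fMRI: S subjects, T time points, R brain regions (assumed sizes).
S, T, R = 6, 120, 10
bold = rng.standard_normal((S, T, R))

def fc_pattern(ts):
    """Pearson FC matrix (R x R) for one subject's time series (T x R)."""
    return np.corrcoef(ts.T)

def fc_similarity(a, b):
    """Similarity of two subjects' FC patterns: mean correlation of the
    corresponding region-wise connectivity profiles (a simplification of
    the paper's rough estimate)."""
    fa, fb = fc_pattern(a), fc_pattern(b)
    sims = [np.corrcoef(fa[r], fb[r])[0, 1] for r in range(fa.shape[0])]
    return float(np.mean(sims))

# Pairwise similarity matrix that could weight the group sparse model.
sim = np.array([[fc_similarity(bold[i], bold[j]) for j in range(S)]
                for i in range(S)])
```

A matrix like `sim` is what could enter the group sparse objective as the additional weighting term the abstract describes.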
Yu Zhang, Han Zhang, Xiaobo Chen, Mingxia Liu, Xiaofeng Zhu, Dinggang Shen. "Inter-subject Similarity Guided Brain Network Modeling for MCI Diagnosis." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 168-175, 2017. DOI: 10.1007/978-3-319-67389-9_20
Pub Date: 2017-09-01 (Epub 2017-09-07). DOI: 10.1007/978-3-319-67389-9_35
Yang Li, Jingyu Liu, Meilin Luo, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen
Recent advances in network modelling techniques have made it possible to study neurological disorders at the whole-brain level, based on functional connectivity inferred from resting-state functional magnetic resonance imaging (rs-fMRI) scans. However, constructing directed effective connectivity, which provides a more comprehensive characterization of functional interactions among brain regions, remains a challenging task, particularly when the ultimate goal is to identify disease-associated anomalies in brain functional interactions. In this paper, we propose a novel method for inferring effective connectivity from multimodal neuroimaging data for brain disease classification. Specifically, we apply a newly devised weighted sparse regression model to rs-fMRI data to determine the network structure of effective connectivity, with guidance from diffusion tensor imaging (DTI) data. We further employ a regression algorithm to estimate the effective connectivity strengths based on the previously identified network structure. Finally, we use a bagging classifier to evaluate the performance of the proposed sparse effective connectivity network in distinguishing mild cognitive impairment from healthy aging.
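A weighted sparse regression of the kind the abstract describes can be sketched with ISTA in numpy. This is a generic weighted lasso, not the authors' exact model; the DTI-guided choice of penalty weights is only indicated by a comment:

```python
import numpy as np

rng = np.random.default_rng(3)

def weighted_lasso_ista(X, y, penalty_weights, lam=0.1, lr=None, n_iter=500):
    """ISTA for min_b 0.5*||y - Xb||^2 + lam * sum_j w_j |b_j|.
    In the paper's spirit, w_j could be made smaller where DTI indicates
    a structural connection (an assumption, not the paper's formula)."""
    n, p = X.shape
    if lr is None:
        lr = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz constant
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        z = b - lr * grad
        thresh = lr * lam * penalty_weights
        b = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-threshold
    return b

# Toy example: one region's time series regressed on the other regions.
T, p = 200, 8
X = rng.standard_normal((T, p))
true_b = np.zeros(p)
true_b[[1, 4]] = [1.5, -2.0]
y = X @ true_b + 0.1 * rng.standard_normal(T)
w = np.ones(p)             # uniform weights here; DTI would modulate these
b_hat = weighted_lasso_ista(X, y, w, lam=0.5)
```

Running this regression once per target region yields a sparse, directed connectivity matrix, which is the structure the bagging classifier would then consume.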
Yang Li, Jingyu Liu, Meilin Luo, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen. "Structural Connectivity Guided Sparse Effective Connectivity for MCI Identification." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 299-306, 2017. DOI: 10.1007/978-3-319-67389-9_35
Pub Date: 2017-09-01 (Epub 2017-09-07). DOI: 10.1007/978-3-319-67389-9_24
Baris U Oguz, Russell T Shinohara, Paul A Yushkevich, Ipek Oguz
Random forests (RF) have long been a widely popular method in medical image analysis. Meanwhile, the closely related gradient boosted trees (GBT) have not become a mainstream tool in medical imaging despite their attractive performance, perhaps due to their computational cost. In this paper, we leverage the recent availability of an efficient open-source GBT implementation to illustrate the GBT method in a corrective learning framework, applied to segmentation of the caudate nucleus, putamen, and hippocampus. The size and shape of these structures are used to derive important biomarkers in many neurological and psychiatric conditions. However, the large variability in deep gray matter appearance makes their automated segmentation from MRI scans a challenging task. We propose using GBT to improve existing segmentation methods. We begin with an existing 'host' segmentation method to create an estimated surface. Based on this estimate, a surface-based sampling scheme is used to construct a set of candidate locations. GBT models are trained on features derived from the candidate locations, including spatial coordinates, image intensity, texture, and gradient magnitude. The classification probabilities from the GBT models are used to calculate a final surface estimate. The method is evaluated on a public dataset with 2-fold cross-validation, using a multi-atlas approach and FreeSurfer as host segmentation methods. The mean reduction in the surface distance error metric was 0.2-0.3 mm for FreeSurfer and 0.1 mm for multi-atlas segmentation, for each of the caudate, putamen, and hippocampus. Importantly, our approach outperformed an RF model trained on the same features (p < 0.05 on all measures). Our method is readily generalizable: it can be applied to a wide range of medical image segmentation problems and allows any segmentation method to be used as input.
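The boosting idea behind GBT can be illustrated with a toy least-squares gradient booster built from regression stumps; the paper uses an efficient open-source implementation with richer trees and multi-dimensional features, so this is only a minimal stand-in on a one-dimensional toy signal:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_stump(x, residual):
    """Best single-split regression stump on one feature."""
    best = (np.inf, None, 0.0, 0.0)
    for s in np.unique(x):
        left, right = residual[x <= s], residual[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best[0]:
            best = (err, s, left.mean(), right.mean())
    return best[1:]

def gbt_fit_predict(x, y, n_trees=100, lr=0.1):
    """Least-squares gradient boosting with stumps: each round fits a
    stump to the current residual and adds a shrunken copy of it."""
    pred = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_trees):
        s, lval, rval = fit_stump(x, y - pred)
        pred += lr * np.where(x <= s, lval, rval)
    return pred

x = rng.uniform(-3, 3, 300)
y = np.sin(x) + 0.1 * rng.standard_normal(300)
pred = gbt_fit_predict(x, y)
mse = float(np.mean((pred - y) ** 2))
```

In the paper's corrective setting, `y` would be a correctness label at each candidate surface location and the features would include coordinates, intensity, texture, and gradient magnitude.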
Baris U Oguz, Russell T Shinohara, Paul A Yushkevich, Ipek Oguz. "Gradient Boosted Trees for Corrective Learning." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 203-211, 2017. DOI: 10.1007/978-3-319-67389-9_24
Pub Date: 2017-09-01 (Epub 2017-09-07). DOI: 10.1007/978-3-319-67389-9_42
Nicha C Dvornek, Pamela Ventola, Kevin A Pelphrey, James S Duncan
Functional magnetic resonance imaging (fMRI) has helped characterize the pathophysiology of autism spectrum disorders (ASD) and holds promise for producing objective biomarkers for ASD. Recent work has focused on deriving ASD biomarkers from resting-state functional connectivity measures. However, current efforts that have identified ASD with high accuracy were limited to homogeneous, small datasets, while classification results for heterogeneous, multi-site data have shown much lower accuracy. In this paper, we propose the use of recurrent neural networks with long short-term memory (LSTM) for classifying individuals with ASD and typical controls directly from resting-state fMRI time series. We used the entire large, multi-site Autism Brain Imaging Data Exchange (ABIDE) I dataset for training and testing the LSTM models. Under a cross-validation framework, we achieved a classification accuracy of 68.5%, which is 9% higher than previously reported methods that used fMRI data from the whole ABIDE cohort. Finally, we present an interpretation of the trained LSTM weights, which highlights potential functional networks and regions known to be implicated in ASD.
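A minimal numpy forward pass shows the LSTM computation applied to an fMRI-like time series; the weights are random here and the readout is a hypothetical logistic layer, not the trained model from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, Wx, Wh, b):
    """Single-layer LSTM forward pass over a (T x D) series.
    Gate weights are stacked [input, forget, cell, output]."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in xs:
        z = x @ Wx + h @ Wh + b
        i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # emit hidden state
    return h

# Toy stand-in for an rs-fMRI series: T time points, D regions (sizes assumed).
T, D, H = 100, 16, 8
xs = rng.standard_normal((T, D))
Wx = rng.standard_normal((D, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)

h_last = lstm_forward(xs, Wx, Wh, b)
# Final hidden state feeds a logistic readout for ASD vs. control.
w_out = rng.standard_normal(H) * 0.1
p_asd = sigmoid(h_last @ w_out)
```

Classifying from the raw time series, rather than from precomputed connectivity matrices, is the key design choice the abstract highlights.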
Nicha C Dvornek, Pamela Ventola, Kevin A Pelphrey, James S Duncan. "Identifying Autism from Resting-State fMRI Using Long Short-Term Memory Networks." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 362-370, 2017. DOI: 10.1007/978-3-319-67389-9_42
Pub Date: 2017-09-01 (Epub 2017-09-07). DOI: 10.1007/978-3-319-67389-9_16
Tao Zhou, Kim-Han Thung, Xiaofeng Zhu, Dinggang Shen
In this paper, we aim to maximally utilize multimodality neuroimaging and genetic data to predict Alzheimer's disease (AD) and its prodromal status, i.e., a multi-status dementia diagnosis problem. Multimodality neuroimaging data such as MRI and PET provide valuable insights into brain abnormalities, while genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. Used in conjunction, they may improve AD diagnosis. However, these data are heterogeneous (e.g., they have different distributions) and unevenly sampled (e.g., far fewer subjects have PET data than MRI or SNP data), so learning an effective model from them is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, in which the deep neural network is trained stage-wise and each stage learns feature representations for a different combination of modalities, using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better addressed before they are combined in the next stage. In the second stage, we learn joint latent features for each pairwise modality combination from the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. We have tested our framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for multi-status AD diagnosis, and the experimental results show that the proposed framework outperforms competing methods.
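The stage-wise structure can be caricatured with PCA standing in for each stage's learned latent representation; all dimensions, and the use of PCA instead of trained network layers, are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

def pca_latent(X, k):
    """Project mean-centered features onto the top-k principal directions,
    standing in for one stage's learned latent representation."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy multimodal data for N subjects (feature dimensions are assumptions).
N = 40
mri = rng.standard_normal((N, 30))
pet = rng.standard_normal((N, 20))
snp = rng.standard_normal((N, 50))

# Stage 1: modality-specific latents, learned independently.
z_mri, z_pet, z_snp = (pca_latent(m, 5) for m in (mri, pet, snp))

# Stage 2: pairwise joint latents built from stage-1 features.
z_mri_pet = pca_latent(np.hstack([z_mri, z_pet]), 4)
z_mri_snp = pca_latent(np.hstack([z_mri, z_snp]), 4)
z_pet_snp = pca_latent(np.hstack([z_pet, z_snp]), 4)

# Stage 3: fuse the joint latents for the final diagnostic classifier.
fused = np.hstack([z_mri_pet, z_mri_snp, z_pet_snp])
```

The point of the staging is sample efficiency: stage 1 can use every subject with at least one modality, and only later stages require the modality combinations with fewer subjects.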
Tao Zhou, Kim-Han Thung, Xiaofeng Zhu, Dinggang Shen. "Feature Learning and Fusion of Multimodality Neuroimaging and Genetic Data for Multi-status Dementia Diagnosis." Machine Learning in Medical Imaging (MLMI Workshop), vol. 10541, pp. 132-140, 2017. DOI: 10.1007/978-3-319-67389-9_16
Pub Date: 2017-01-01 | Epub Date: 2017-09-07 | DOI: 10.1007/978-3-319-67389-9_36
Yang Li, Jingyu Liu, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen
Functional connectivity networks derived from resting-state fMRI data have been found to be effective biomarkers for identifying patients with mild cognitive impairment (MCI) among healthy elderly subjects. However, the ordinary functional connectivity network is essentially a low-order network built on the assumption that the brain is static during the entire scanning period, ignoring the temporal variations among correlations derived from brain region pairs. To overcome this weakness, we proposed a new type of high-order network to more accurately describe the relationship of temporal variations among brain regions. Specifically, instead of the commonly used undirected pairwise Pearson's correlation coefficient, we first estimated the low-order effective connectivity network based on a novel sparse regression algorithm. Using a similar approach, we then constructed the high-order effective connectivity network from the low-order connectivity to incorporate signal-flow information among the brain regions. Finally, we combined the low-order and high-order effective connectivity networks using two decision trees for MCI classification. Experimental results demonstrate the superiority of the proposed method over the conventional undirected low-order and high-order functional connectivity networks, as well as over the low-order and high-order effective connectivity networks used separately.
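The sparse-regression construction of a directed (effective) connectivity network can be sketched as below. This is a simplified stand-in for the paper's novel sparse regression algorithm: plain lasso regression of each region's time series on all other regions, with random toy data in place of real fMRI signals, and the same operator reapplied to the low-order network to obtain a high-order one.

```python
import numpy as np
from sklearn.linear_model import Lasso

def effective_connectivity(ts, alpha=0.1):
    """Directed connectivity by sparse regression: regress each region's
    signal on all other regions; the lasso coefficients form row i of the
    connectivity matrix (nonzero W[i, j] means region j helps predict i)."""
    t, r = ts.shape
    W = np.zeros((r, r))
    for i in range(r):
        others = np.delete(np.arange(r), i)
        model = Lasso(alpha=alpha).fit(ts[:, others], ts[:, i])
        W[i, others] = model.coef_
    return W

rng = np.random.default_rng(0)
ts = rng.normal(size=(120, 6))               # toy data: 120 time points, 6 regions
low_order = effective_connectivity(ts)
# High-order network: apply the same sparse regression to the rows of the
# low-order network, capturing relations among connectivity profiles.
high_order = effective_connectivity(low_order)
print(low_order.shape, high_order.shape)
```

Unlike a Pearson-correlation matrix, the resulting W is asymmetric, which is what lets it encode signal-flow direction between regions.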
{"title":"Fusion of High-Order and Low-Order Effective Connectivity Networks for MCI Classification.","authors":"Yang Li, Jingyu Liu, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen","doi":"10.1007/978-3-319-67389-9_36","DOIUrl":"10.1007/978-3-319-67389-9_36","url":null,"abstract":"<p><p>Functional connectivity network derived from resting-state fMRI data has been found as effective biomarkers for identifying patients with mild cognitive impairment from healthy elderly. However, the ordinary functional connectivity network is essentially a low-order network with the assumption that the brain is static during the entire scanning period, ignoring the temporal variations among correlations derived from brain region pairs. To overcome this weakness, we proposed a new type of high-order network to more accurately describe the relationship of temporal variations among brain regions. Specifically, instead of the commonly used undirected pairwise Pearson's correlation coefficient, we first estimated the low-order effective connectivity network based on a novel sparse regression algorithm. By using the similar approach, we then constructed the high-order effective connectivity network from low-order connectivity to incorporate signal flow information among the brain regions. We finally combined the low-order and the high-order effective connectivity networks using two decision trees for MCI classification and experimental results obtained demonstrate the superiority of the proposed method over the conventional undirected low-order and high-order functional connectivity networks, as well as the low-order and high-order effective connectivity networks when they were used separately.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. 
MLMI (Workshop)","volume":"2017 ","pages":"307-315"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5999334/pdf/nihms939425.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36230106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-01-01 | Epub Date: 2017-09-07 | DOI: 10.1007/978-3-319-67389-9_18
Pei Dong, Xiaohuan Cao, Jun Zhang, Minjeong Kim, Guorong Wu, Dinggang Shen
Groupwise image registration provides an unbiased registration solution over a population of images, which can facilitate subsequent population analysis. However, performing groupwise registration on a large set of images is generally computationally expensive. To alleviate this issue, we propose a fast initialization technique to speed up groupwise registration. Our main idea is to generate a set of simulated brain MRI samples with known deformations to their group center. This is achieved in the training stage in two steps. First, a set of training brain MR images is registered to its group center with an existing groupwise registration method. Then, to augment the samples, we perform PCA on the resulting deformation fields (to the group center) to parameterize them. In doing so, we can generate a large number of deformation fields, together with their respective simulated samples, by varying the PCA parameters. In the application stage, given a new set of testing brain MR images, we mix them with the augmented training samples. For each testing image, we then find its closest sample in the augmented training dataset to quickly estimate its deformation field to the group center of the training set. In this way, a tentative group center of the testing image set can be immediately estimated, along with the deformation field of each testing image to this estimated center. With this fast initialization, we finally use an existing groupwise registration method to quickly refine the groupwise registration results. Experimental results on the ADNI dataset show significantly improved computational efficiency and competitive registration accuracy, compared to state-of-the-art groupwise registration methods.
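The PCA-based augmentation and nearest-sample initialization described above can be sketched as follows. This is a toy illustration under stated assumptions: the deformation fields are random flattened vectors on a tiny 4x4x4 grid rather than real registration outputs, and the closest sample is found directly in deformation space, whereas the paper matches testing images against the simulated image samples.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical training deformation fields (to the group center), flattened:
# 20 subjects, each field a 3-vector displacement on a 4x4x4 grid.
fields = rng.normal(size=(20, 4 * 4 * 4 * 3))

# Parameterize the fields with PCA, then sample new coefficients (scaled by
# the per-component standard deviation) to generate many simulated fields.
pca = PCA(n_components=5).fit(fields)
coeffs = rng.normal(size=(500, 5)) * np.sqrt(pca.explained_variance_)
augmented = pca.inverse_transform(coeffs)    # 500 simulated deformation fields

# Application stage: for a new subject, pick the closest augmented sample as
# a fast initialization of its deformation to the group center.
test_field = fields[0] + 0.01 * rng.normal(size=fields.shape[1])
nearest = int(np.argmin(np.linalg.norm(augmented - test_field, axis=1)))
init = augmented[nearest]
print(augmented.shape, init.shape)
```

Because every augmented sample carries a known deformation to the group center, the lookup replaces an expensive optimization with a nearest-neighbor search, which is where the speed-up comes from.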
{"title":"Efficient Groupwise Registration for Brain MRI by Fast Initialization.","authors":"Pei Dong, Xiaohuan Cao, Jun Zhang, Minjeong Kim, Guorong Wu, Dinggang Shen","doi":"10.1007/978-3-319-67389-9_18","DOIUrl":"https://doi.org/10.1007/978-3-319-67389-9_18","url":null,"abstract":"<p><p>Groupwise image registration provides an unbiased registration solution upon a population of images, which can facilitate the subsequent population analysis. However, it is generally computationally expensive for performing groupwise registration on a large set of images. To alleviate this issue, we propose to utilize a fast initialization technique for speeding up the groupwise registration. Our main idea is to generate a set of simulated brain MRI samples with known deformations to their group center. This can be achieved in the training stage by two steps. First, a set of training brain MR images is registered to their group center with a certain existing groupwise registration method. Then, in order to augment the samples, we perform PCA on the set of obtained deformation fields (to the group center) to parameterize the deformation fields. In doing so, we can generate a large number of deformation fields, as well as their respective simulated samples using different parameters for PCA. In the application stage, when given a new set of testing brain MR images, we can mix them with the augmented training samples. Then, for each testing image, we can find its closest sample in the augmented training dataset for fast estimating its deformation field to the group center of the training set. In this way, a tentative group center of the testing image set can be immediately estimated, and the deformation field of each testing image to this estimated group center can be obtained. With this fast initialization for groupwise registration of testing images, we can finally use an existing groupwise registration method to quickly refine the groupwise registration results. 
Experimental results on ADNI dataset show the significantly improved computational efficiency and competitive registration accuracy, compared to state-of-the-art groupwise registration methods.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"10541 ","pages":"150-158"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-319-67389-9_18","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35687492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}