Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635249
Lina Takemaru, Shu Yang, Ruiming Wu, Bing He, Christos Davatzikos, Jingwen Yan, Li Shen
Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by progressive cognitive degeneration and motor impairment, affecting millions worldwide. Mapping the progression of AD is crucial for early detection of loss of brain function, timely intervention, and development of effective treatments. However, accurate measurements of disease progression are still challenging at present. This study presents a novel approach to understanding the heterogeneous pathways of AD through longitudinal biomarker data from medical imaging and other modalities. We propose an analytical pipeline adopting two popular machine learning methods from the single-cell transcriptomics domain, PHATE and Slingshot, to project multimodal biomarker trajectories to a low-dimensional space. These embeddings serve as our pseudotime estimates. We applied this pipeline to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to align longitudinal data across individuals at various disease stages. Our approach mirrors the technique used to cluster single-cell data into cell types based on developmental timelines. Our pseudotime estimates revealed distinct patterns of disease evolution and biomarker changes over time, providing a deeper understanding of the temporal dynamics of AD. The results show the potential of the approach in the clinical domain of neurodegenerative diseases, enabling more precise disease modeling and early diagnosis.
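The embedding-then-pseudotime idea can be illustrated with a numpy-only toy, substituting a PCA projection for the PHATE embedding and Slingshot curve fitting the paper actually uses (the data and the `pseudotime_pca` helper are synthetic illustrations, not the authors' pipeline):

```python
import numpy as np

def pseudotime_pca(X):
    """Toy pseudotime: project biomarker vectors onto the first
    principal component and rescale to [0, 1]. A crude stand-in for
    the PHATE embedding + Slingshot trajectory fit used in the paper."""
    Xc = X - X.mean(axis=0)                      # center each biomarker
    # first right singular vector = leading principal axis
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    t = Xc @ Vt[0]                               # 1-D projection
    t = (t - t.min()) / (t.max() - t.min())      # normalize to [0, 1]
    return t

# synthetic "biomarker trajectories": 50 visits drifting along one axis
rng = np.random.default_rng(0)
stage = np.linspace(0, 1, 50)                    # true (hidden) disease stage
X = np.outer(stage, [2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal((50, 3))
t = pseudotime_pca(X)                            # recovered pseudotime
```

On this toy data the recovered pseudotime is monotone (up to sign) in the hidden stage, which is the property the real pipeline relies on to align subjects scanned at different disease stages.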
Title: MAPPING ALZHEIMER'S DISEASE PSEUDO-PROGRESSION WITH MULTIMODAL BIOMARKER TRAJECTORY EMBEDDINGS
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452153/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635895
Chi Zhang, Omer Burak Demirel, Mehmet Akçakaya
Physics-driven deep learning (PD-DL) has become a powerful tool for accelerated MRI. Recent work has also extended PD-DL to unsupervised settings, including self-supervised learning. However, at very high acceleration rates, such approaches show performance deterioration. In this study, we propose to use cycle consistency (CC) to improve self-supervised learning for highly accelerated MRI. In our proposed CC, simulated measurements are obtained by undersampling the network output using patterns drawn from the same distribution as the true one. Reconstructions of these simulated measurements are obtained with the same network and then compared to the acquired data at the true sampling locations. This CC approach is used in conjunction with a masking-based self-supervised loss. Results show that the proposed method can substantially reduce aliasing artifacts at high acceleration rates, including rate-6 and rate-8 fastMRI knee imaging and 20-fold HCP-style fMRI.
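A minimal sketch of the cycle-consistency idea, with a zero-filled inverse FFT standing in for the trained reconstruction network (1-D, single-coil, synthetic signals and masks; the real method uses a PD-DL network on multi-coil data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# smooth 1-D "image" as the ground truth
x_true = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")

def undersample(x, mask):
    """Forward model: FFT followed by k-space masking."""
    return np.fft.fft(x) * mask

def recon(y, mask):
    """Toy 'network': zero-filled inverse FFT reconstruction."""
    return np.real(np.fft.ifft(y * mask))

# acquired data at roughly 4x acceleration
mask = (rng.random(n) < 0.25).astype(float)
y = undersample(x_true, mask)
x_hat = recon(y, mask)

# cycle consistency: undersample the output with a NEW mask drawn from
# the same distribution, reconstruct again, and compare to the acquired
# data at the true sampling locations
mask2 = (rng.random(n) < 0.25).astype(float)
x_cc = recon(undersample(x_hat, mask2), mask2)
cc_loss = np.mean(np.abs(undersample(x_cc, mask) - y) ** 2)
```

In training, `cc_loss` would be added to the masking-based self-supervised loss and backpropagated through the network; here it is just evaluated once.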
Title: CYCLE-CONSISTENT SELF-SUPERVISED LEARNING FOR IMPROVED HIGHLY-ACCELERATED MRI RECONSTRUCTION
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11736014/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635712
Jun-En Ding, Chien-Chin Hsu, Feng Liu
Parkinson's Disease (PD) affects millions of people globally, impairing movement. Prior research applied deep learning to PD prediction, focusing primarily on medical images while neglecting the underlying manifold structure of the data. This work proposes a multimodal approach encompassing both image and non-image features, leveraging contrastive cross-view graph fusion for PD classification. We introduce a novel multimodal co-attention module that integrates embeddings from separate graph views derived from low-dimensional representations of images and clinical features, enabling more robust and structured feature extraction for improved multi-view data analysis. Additionally, a simplified contrastive loss-based fusion method is devised to enhance cross-view fusion learning. Our graph-view multimodal approach achieves an accuracy of 91% and an area under the receiver operating characteristic curve (AUC) of 92.8% in five-fold cross-validation, and demonstrates superior predictive capability on non-image data compared with methods based solely on machine learning.
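The contrastive cross-view objective can be sketched as a generic InfoNCE-style loss between paired view embeddings (this is not the authors' exact loss; the embeddings below are random stand-ins for the image and clinical graph views):

```python
import numpy as np

def contrastive_cross_view_loss(z1, z2, tau=0.5):
    """Simplified InfoNCE-style loss between two view embeddings.
    Row i of z1 and row i of z2 embed the same subject (positive
    pair); all other rows in the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprob))           # positives on the diagonal

rng = np.random.default_rng(2)
z_img = rng.standard_normal((8, 16))                  # image-view embeddings
z_clin = z_img + 0.1 * rng.standard_normal((8, 16))   # well-aligned clinical view
z_rand = rng.standard_normal((8, 16))                 # unaligned view
aligned = contrastive_cross_view_loss(z_img, z_clin)
unaligned = contrastive_cross_view_loss(z_img, z_rand)
```

The loss is small when the two views agree subject-by-subject and large when they do not, which is the signal that drives cross-view fusion learning.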
Title: PARKINSON'S DISEASE CLASSIFICATION USING CONTRASTIVE GRAPH CROSS-VIEW LEARNING WITH MULTIMODAL FUSION OF SPECT IMAGES AND CLINICAL FEATURES
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467967/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635541
Maryam Akhavan Aghdam, Serdar Bozdag, Fahad Saeed
Alzheimer's disease (AD) is a neurodegenerative disorder, and timely diagnosis is crucial for early interventions. AD is known to disrupt local and global brain neural connections, which may be instrumental in understanding and extracting specific biomarkers. Existing machine-learning approaches are mostly based on convolutional neural network (CNN) and standard vision transformer (ViT) models, which may not sufficiently capture the multidimensional local and global patterns indicative of AD. Therefore, in this paper, we propose a novel approach called PVTAD to classify AD and cognitively normal (CN) cases using a pretrained pyramid vision transformer (PVT) and the white matter (WM) of T1-weighted structural MRI (sMRI) data. Our approach combines the advantages of CNN and standard ViT to extract both local and global features indicative of AD from the WM coronal middle slices. We performed experiments on subjects with T1-weighted MPRAGE sMRI scans from the ADNI dataset. Our results demonstrate that PVTAD achieves an average accuracy of 97.7% and an F1-score of 97.6%, outperforming the single and parallel CNN and standard ViT based on sMRI data for AD vs. CN classification. Our code is available at https://github.com/pcdslab/PVTAD.
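Extracting the coronal middle slices that feed the transformer can be sketched as follows (the axis convention and slice count are assumptions for illustration; real sMRI volumes need orientation handling and WM segmentation first):

```python
import numpy as np

def coronal_middle_slices(vol, k=3):
    """Take k slices around the midline of a 3-D volume, with axis 1
    assumed (for this toy) to be the anterior-posterior/coronal axis."""
    mid = vol.shape[1] // 2
    half = k // 2
    return vol[:, mid - half : mid - half + k, :]

vol = np.arange(4 * 5 * 6, dtype=float).reshape(4, 5, 6)  # toy "WM volume"
slices = coronal_middle_slices(vol, k=3)                  # shape (4, 3, 6)
```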
Title: PVTAD: ALZHEIMER'S DISEASE DIAGNOSIS USING PYRAMID VISION TRANSFORMER APPLIED TO WHITE MATTER OF T1-WEIGHTED STRUCTURAL MRI DATA
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11877309/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635179
Jyothi Rikhab Chand, Mathews Jacob
End-to-End (E2E) unrolled optimization frameworks show promise for Magnetic Resonance (MR) image recovery, but suffer from high memory usage during training. In addition, these deterministic approaches do not offer opportunities for sampling from the posterior distribution. In this paper, we introduce a memory-efficient approach for E2E learning of the posterior distribution. We represent this distribution as the combination of a data-consistency-induced likelihood term and an energy model for the prior, parameterized by a Convolutional Neural Network (CNN). The CNN weights are learned from training data in an E2E fashion using maximum likelihood optimization. The learned model enables the recovery of images from undersampled measurements using maximum a posteriori (MAP) optimization. In addition, the posterior model can be sampled to derive uncertainty maps for the reconstruction. Experiments on parallel MR image reconstruction show that our approach performs comparably to the memory-intensive E2E unrolled algorithm, outperforms its memory-efficient counterpart, and can provide uncertainty maps. Our framework paves the way towards MR image reconstruction in 3D and higher dimensions.
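The MAP step can be sketched with a toy quadratic smoothness energy in place of the learned CNN energy model (1-D, single-coil, noiseless, all purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
x_true = np.convolve(rng.standard_normal(n), np.ones(6) / 6, mode="same")
mask = (rng.random(n) < 0.5).astype(float)   # random undersampling pattern
y = np.fft.fft(x_true) * mask                # noiseless undersampled k-space

def grad_energy(x, lam=1.0):
    """Gradient of a toy smoothness energy E(x) = lam/2 * ||Dx||^2,
    standing in for the gradient of the learned CNN energy model."""
    d = np.diff(x)
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return lam * g

# MAP estimate: minimize 0.5*||M F x - y||^2 + E(x) by gradient descent
x = np.zeros(n)
for _ in range(300):
    resid = np.fft.fft(x) * mask - y
    # adjoint of (mask * FFT) under numpy's unnormalized DFT convention
    grad_dc = np.real(np.fft.ifft(resid * mask)) * n
    x -= 0.01 * (grad_dc + grad_energy(x))
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Sampling the posterior (e.g., via Langevin dynamics on the same energy) would add noise to each gradient step; repeated samples then yield the uncertainty maps described above.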
Title: MEMORY-EFFICIENT DEEP END-TO-END POSTERIOR NETWORK (DEEPEN) FOR INVERSE PROBLEMS
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381932/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635270
Hongwei Bran Li, Matthew S Rosen, Shahin Nasr, Juan Eugenio Iglesias
High-resolution fMRI provides a window into the brain's mesoscale organization. Yet higher spatial resolution lengthens scan times, which are needed to compensate for the low signal- and contrast-to-noise ratios. This work introduces a deep learning-based 3D super-resolution (SR) method for fMRI. By incorporating a resolution-agnostic image augmentation framework, our method adapts to varying voxel sizes without retraining. We apply this technique to localize fine-scale motion-selective sites in the early visual areas. Detection of these sites typically requires ≤1 mm isotropic data, whereas here we visualize them from lower-resolution (2-3 mm isotropic) fMRI data. Remarkably, the super-resolved fMRI recovers high-frequency detail of the interdigitated organization of these sites (relative to the color-selective sites), even with training data sourced from different subjects and experimental paradigms, including non-visual resting-state fMRI, underscoring its robustness and versatility. Quantitative and qualitative results indicate that our method has the potential to enhance the spatial resolution of fMRI, leading to a drastic reduction in acquisition time.
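A resolution-agnostic augmentation can be sketched as randomly resampling a training volume to a simulated voxel size and back onto the original grid (nearest-neighbour resampling for simplicity; this is an assumed approximation of the paper's augmentation framework, not its implementation):

```python
import numpy as np

def random_resolution_augment(vol, rng, lo=1.0, hi=3.0):
    """Simulate acquiring the same volume at a random voxel size by
    nearest-neighbour downsampling, then upsampling back to the
    original grid so the network sees varied effective resolutions."""
    factor = rng.uniform(lo, hi)                 # simulated voxel-size ratio
    out = vol
    for ax in range(vol.ndim):
        n = vol.shape[ax]
        coarse = max(2, int(round(n / factor)))
        down = np.linspace(0, n - 1, coarse).round().astype(int)
        up = np.linspace(0, coarse - 1, n).round().astype(int)
        out = np.take(np.take(out, down, axis=ax), up, axis=ax)
    return out

rng = np.random.default_rng(4)
vol = rng.standard_normal((16, 16, 16))          # toy fMRI volume
aug = random_resolution_augment(vol, rng)        # same grid, blurrier content
```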
Title: RESOLUTION- AND STIMULUS-AGNOSTIC SUPER-RESOLUTION OF ULTRA-HIGH-FIELD FUNCTIONAL MRI: APPLICATION TO VISUAL STUDIES
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12376370/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635581
Zixuan Wen, Jingxuan Bao, Shu Yang, Junhao Wen, Qipeng Zhan, Yuhan Cui, Guray Erus, Zhijian Yang, Paul M Thompson, Yize Zhao, Christos Davatzikos, Li Shen
Morphometricity examines the global statistical association between brain morphology and an observable trait, and is defined as the proportion of trait variation attributable to brain morphology. In this work, we propose an accurate morphometricity estimator based on the generalized random effects (GRE) model and perform morphometricity analyses on five cognitive traits in an Alzheimer's study. Our empirical study shows that the proposed GRE model outperforms the widely used LME model on both simulated and real data. In addition, we extend morphometricity estimation from the whole-brain to the focal-brain level, and examine and quantify both global and regional neuroanatomical signatures of the cognitive traits. Our global analysis reveals 1) a relatively strong anatomical basis for ADAS13, 2) intermediate ones for MMSE, CDRSB, and FAQ, and 3) a relatively weak one for RAVLT.learning. The top associations identified by our regional morphometricity analysis include those between all five cognitive traits and multiple regions such as the hippocampus, amygdala, and inferior lateral ventricles. As expected, the identified regional associations are weaker than the global ones. While the whole-brain analysis is more powerful in identifying higher morphometricity, the regional analysis can localize the neuroanatomical signatures of the studied cognitive traits and thus provide valuable information for imaging and cognitive biomarker discovery in normal and disordered brain research.
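The quantity being estimated can be illustrated with a simple method-of-moments (Haseman-Elston-style) estimator in place of the paper's GRE/LME fits: given an anatomical similarity matrix, it recovers the fraction of trait variance captured by morphology (the simulation and the estimator are illustrative assumptions, not the authors' code):

```python
import numpy as np

def morphometricity_he(y, K):
    """Haseman-Elston-style morphometricity estimate: regress the
    off-diagonal trait cross-products y_i*y_j on the anatomical
    similarity K_ij; the slope estimates the morphology-attributable
    variance, which is then divided by the total trait variance."""
    y = y - y.mean()
    n = len(y)
    iu = np.triu_indices(n, k=1)
    cp = np.outer(y, y)[iu]                       # cross-products y_i*y_j
    k = K[iu] - K[iu].mean()
    sigma_g2 = max((k @ (cp - cp.mean())) / (k @ k), 0.0)  # regression slope
    return min(sigma_g2 / y.var(), 1.0)

# simulate: similarity built from random "brain features"; the trait is
# 60% anatomical signal plus 40% independent noise
rng = np.random.default_rng(5)
n, p = 200, 100
Z = rng.standard_normal((n, p)) / np.sqrt(p)
K = Z @ Z.T                                       # anatomical similarity matrix
g = Z @ rng.standard_normal(p) * np.sqrt(0.6)     # morphology-driven component
e = rng.standard_normal(n) * np.sqrt(0.4)         # residual component
m2 = morphometricity_he(g + e, K)                 # should be near 0.6
```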
Title: MULTISCALE ESTIMATION OF MORPHOMETRICITY FOR REVEALING NEUROANATOMICAL BASIS OF COGNITIVE TRAITS
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452152/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635530
Merve Gülle, Mehmet Akçakaya
Real-time dynamic MRI is important for visualizing time-varying processes in several applications, including cardiac imaging, where it enables free-breathing images of the beating heart without ECG gating. However, current real-time MRI techniques commonly face challenges in achieving the required spatio-temporal resolutions due to limited acceleration rates. In this study, we propose a deep learning (DL) technique for improving the estimation of the stationary outer-volume signal from shifted time-interleaved undersampling patterns. Our approach exploits the pseudo-periodic nature of the ghosting artifacts arising from the moving organs. The estimated outer-volume signal is then subtracted from individual timeframes of the real-time MR time series, and each timeframe is reconstructed individually using physics-driven DL methods. Results show improved image quality at high acceleration rates, where conventional methods fail.
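The core outer-volume idea can be sketched in 1-D: with shifted time-interleaved masks, averaging the acquired k-space makes the static signal add coherently while pseudo-periodic motion averages out (a toy model with synthetic motion; the actual method uses a DL ghosting detector and 2-D multi-coil data):

```python
import numpy as np

rng = np.random.default_rng(6)
n, nframes, R = 32, 8, 4
static = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")
frames, kspace = [], []
for t in range(nframes):
    moving = np.zeros(n)
    moving[12:20] = np.sin(2 * np.pi * t / nframes)  # "beating heart" region
    x = static + moving
    mask = np.zeros(n)
    mask[t % R :: R] = 1.0                           # shifted interleaved lines
    frames.append(x)
    kspace.append(np.fft.fft(x) * mask)

# each k-space location is sampled in nframes/R frames; averaging those
# samples keeps the static signal and cancels the (pseudo-)periodic motion
k_avg = sum(kspace) / (nframes / R)
outer_est = np.real(np.fft.ifft(k_avg))              # estimated static signal
dynamic = frames[1] - outer_est                      # outer-volume subtraction
```

After subtraction, only the moving-organ signal remains in each timeframe, which is then reconstructed with physics-driven DL in the actual method.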
Title: ROBUST OUTER VOLUME SUBTRACTION WITH DEEP LEARNING GHOSTING DETECTION FOR HIGHLY-ACCELERATED REAL-TIME DYNAMIC MRI
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11742269/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635579
Muhammad Ahmad Sultan, Chong Chen, Yingmin Liu, Xuan Lei, Rizwan Ahmad
High-quality training data are not always available in dynamic MRI. To address this, we propose a self-supervised deep learning method called deep image prior with structured sparsity (DISCUS) for reconstructing dynamic images. DISCUS is inspired by deep image prior (DIP) and recovers a series of images through joint optimization of network parameters and input code vectors. However, DISCUS additionally encourages group sparsity on frame-specific code vectors to discover the low-dimensional manifold that describes temporal variations across frames. Compared to prior work on manifold learning, DISCUS does not require specifying the manifold dimensionality. We validate DISCUS using three numerical studies. In the first study, we simulate a dynamic Shepp-Logan phantom with frames undergoing random rotations, translations, or both, and demonstrate that DISCUS can discover the dimensionality of the underlying manifold. In the second study, we use data from a realistic late gadolinium enhancement (LGE) phantom to compare DISCUS with compressed sensing (CS) and DIP, and to demonstrate the positive impact of group sparsity. In the third study, we use retrospectively undersampled single-shot LGE data from five patients to compare DISCUS with CS reconstructions. The results from these studies demonstrate that DISCUS outperforms CS and DIP, and that enforcing group sparsity on the code vectors helps discover true manifold dimensionality and provides additional performance gain.
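The group-sparsity ingredient can be sketched as an L2,1 penalty across the frame-specific code vectors together with its proximal operator, which zeroes entire code dimensions and thereby exposes the manifold dimensionality (a generic sketch of the penalty only, not the full DISCUS optimization):

```python
import numpy as np

def group_l1_penalty(codes):
    """L2,1 group-sparsity penalty on code vectors: sum over code
    dimensions of the L2 norm across frames (codes: frames x dims).
    It drives whole code dimensions to zero, so the number of
    surviving dimensions need not be fixed in advance."""
    return np.linalg.norm(codes, axis=0).sum()

def prox_group_l1(codes, step):
    """Proximal (block soft-threshold) operator for the L2,1 penalty."""
    norms = np.linalg.norm(codes, axis=0, keepdims=True)
    scale = np.maximum(1.0 - step / np.maximum(norms, 1e-12), 0.0)
    return codes * scale

rng = np.random.default_rng(7)
codes = rng.standard_normal((20, 8))   # 20 frames, 8 candidate dimensions
codes[:, 2:] *= 0.05                   # only 2 dimensions really vary
shrunk = prox_group_l1(codes, step=1.0)
active = int((np.linalg.norm(shrunk, axis=0) > 0).sum())  # surviving dims
```

The threshold (`step`) trades off sparsity against fit; in DISCUS-style training it would be applied alongside the data-consistency updates of the network and codes.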
Title: DEEP IMAGE PRIOR WITH STRUCTURED SPARSITY (DISCUS) FOR DYNAMIC MRI RECONSTRUCTION
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063720/pdf/
Pub Date: 2024-05-01 | Epub Date: 2024-08-22 | DOI: 10.1109/isbi56570.2024.10635382
Xuan Lei, Philip Schniter, Chong Chen, Muhammad Ahmad Sultan, Rizwan Ahmad
Modern MRI scanners utilize one or more arrays of small receive-only coils to collect k-space data. The sensitivity maps of the coils, when estimated using traditional methods, differ from the true sensitivity maps, which are generally unknown. Consequently, the reconstructed MR images exhibit undesired spatial variation in intensity. These intensity variations can be at least partially corrected using pre-scan data. In this work, we propose an intensity correction method that utilizes pre-scan data. For demonstration, we apply our method to a digital phantom, as well as to cardiac MRI data collected from a commercial scanner by Siemens Healthineers. The code is available at https://github.com/OSU-MR/SCC.
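A generic version of pre-scan-based intensity correction can be sketched as dividing by a smoothed surface-to-body-coil ratio map (the processing in the paper and its SCC code may differ; this is an assumed textbook-style scheme on synthetic 2-D data):

```python
import numpy as np

def intensity_correction(surface_img, body_img, smooth=5):
    """Correct surface-coil shading by multiplying the surface-coil
    image with a smoothed body-to-surface ratio map estimated from
    pre-scan data. Box smoothing keeps only the slowly varying
    coil-sensitivity component of the ratio."""
    ratio = body_img / np.maximum(surface_img, 1e-8)
    kernel = np.ones(smooth) / smooth
    # separable box smoothing of the ratio map (zero-padded at edges)
    ratio = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 0, ratio)
    ratio = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, ratio)
    return surface_img * ratio

rng = np.random.default_rng(8)
truth = np.ones((32, 32)) + 0.1 * rng.standard_normal((32, 32))
yy, xx = np.mgrid[0:32, 0:32]
shading = np.exp(-((xx - 28) ** 2 + (yy - 28) ** 2) / 800.0)  # coil falloff
corrected = intensity_correction(truth * shading, truth)
err_before = np.abs(truth * shading - truth).mean()
err_after = np.abs(corrected - truth).mean()
```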
Title: SURFACE COIL INTENSITY CORRECTION FOR MRI
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging, 2024
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063721/pdf/