
Proceedings. IEEE International Symposium on Biomedical Imaging: Latest Publications

MAPPING ALZHEIMER'S DISEASE PSEUDO-PROGRESSION WITH MULTIMODAL BIOMARKER TRAJECTORY EMBEDDINGS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635249
Lina Takemaru, Shu Yang, Ruiming Wu, Bing He, Christos Davatzikos, Jingwen Yan, Li Shen

Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by progressive cognitive degeneration and motor impairment, affecting millions worldwide. Mapping the progression of AD is crucial for early detection of loss of brain function, timely intervention, and development of effective treatments. However, accurately measuring disease progression remains challenging. This study presents a novel approach to understanding the heterogeneous pathways of AD through longitudinal biomarker data from medical imaging and other modalities. We propose an analytical pipeline adopting two popular machine learning methods from the single-cell transcriptomics domain, PHATE and Slingshot, to project multimodal biomarker trajectories to a low-dimensional space. These embeddings serve as our pseudotime estimates. We applied this pipeline to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to align longitudinal data across individuals at various disease stages. Our approach mirrors the technique used to cluster single-cell data into cell types based on developmental timelines. Our pseudotime estimates revealed distinct patterns of disease evolution and biomarker changes over time, providing a deeper understanding of the temporal dynamics of AD. The results show the potential of the approach in the clinical domain of neurodegenerative diseases, enabling more precise disease modeling and early diagnosis.
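
The pipeline described above (PHATE embedding followed by Slingshot pseudotime) can be pictured with a short sketch. The snippet below assumes the open-source `phate` package and uses synthetic biomarker data; because Slingshot is an R tool, the principal-curve pseudotime is approximated here by a simple ordering along the first embedding axis, so treat this as a stand-in rather than the authors' pipeline.

```python
# Minimal sketch: embed multimodal biomarker trajectories with PHATE and
# derive a crude pseudotime ordering. The `phate` package (pip install phate)
# is assumed; Slingshot's principal-curve pseudotime is approximated by
# ranking visits along the first embedding axis, which is only a stand-in.
import numpy as np
import phate

rng = np.random.default_rng(0)

# Synthetic stand-in for ADNI-style data: 200 subject-visits, 12 biomarkers
# (e.g., regional volumes, PET SUVRs, cognitive scores), noisily ordered by
# an unobserved disease stage t in [0, 1].
t = rng.uniform(0.0, 1.0, size=200)
slopes = rng.normal(1.0, 0.3, size=12)
X = t[:, None] * slopes[None, :] + 0.2 * rng.normal(size=(200, 12))

# Project trajectories to a low-dimensional space with PHATE.
op = phate.PHATE(n_components=2, knn=10, random_state=0)
embedding = op.fit_transform(X)          # shape (200, 2)

# Crude pseudotime: rank along the first PHATE axis, rescaled to [0, 1].
order = np.argsort(embedding[:, 0])
pseudotime = np.empty_like(t)
pseudotime[order] = np.linspace(0.0, 1.0, len(order))

# Sanity check: pseudotime should correlate with the hidden stage (up to sign).
print("corr(pseudotime, true stage):", abs(np.corrcoef(pseudotime, t)[0, 1]))
```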

Citations: 0
CYCLE-CONSISTENT SELF-SUPERVISED LEARNING FOR IMPROVED HIGHLY-ACCELERATED MRI RECONSTRUCTION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635895
Chi Zhang, Omer Burak Demirel, Mehmet Akçakaya

Physics-driven deep learning (PD-DL) has become a powerful tool for accelerated MRI. Recent work has also extended PD-DL to unsupervised training, including self-supervised learning. However, at very high acceleration rates, such approaches show performance deterioration. In this study, we propose to use cycle-consistency (CC) to improve self-supervised learning for highly accelerated MRI. In our proposed CC, simulated measurements are obtained by undersampling the network output using patterns drawn from the same distribution as the true one. The reconstructions of these simulated measurements are obtained using the same network, which are then compared to the acquired data at the true sampling locations. This CC approach is used in conjunction with a masking-based self-supervised loss. Results show that the proposed method can substantially reduce aliasing artifacts at high acceleration rates, including rate 6 and 8 fastMRI knee imaging and 20-fold HCP-style fMRI.
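
A minimal sketch of the cycle-consistency idea combined with a masking-based self-supervised loss follows, under simplifying assumptions: single-coil Cartesian data and a toy image-domain network standing in for the physics-driven unrolled reconstruction. All masks, shapes, and weights are illustrative.

```python
# Cycle-consistency (CC) sketch for self-supervised MRI reconstruction,
# single-coil Cartesian case. `recon_net` stands in for a physics-driven
# unrolled network; all data below are toy placeholders.
import torch
import torch.nn as nn

def fft2c(x):   return torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(x)))
def ifft2c(k):  return torch.fft.fftshift(torch.fft.ifft2(torch.fft.ifftshift(k)))

H = W = 64
recon_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 2, 3, padding=1))      # toy image-domain net

def reconstruct(kspace, mask):
    """Zero-filled input -> network output, returned as a complex image."""
    img = ifft2c(kspace * mask)
    inp = torch.stack([img.real, img.imag], dim=0).unsqueeze(0).float()
    out = recon_net(inp).squeeze(0)
    return torch.complex(out[0], out[1])

def sample_mask(accel=8):
    """Random column mask drawn from the same distribution as the true one."""
    cols = (torch.rand(W) < (1.0 / accel)).float()
    cols[W // 2 - 2: W // 2 + 2] = 1.0                    # always keep center lines
    return cols[None, :].expand(H, W).clone()

# Acquired (undersampled) data for one frame.
x_true = torch.randn(H, W) + 1j * torch.randn(H, W)       # placeholder object
mask_acq = sample_mask()
y_acq = fft2c(x_true) * mask_acq

# Masking-based self-supervised loss: hold out part of the acquired lines.
split = (torch.rand(H, W) < 0.6).float()
mask_train, mask_loss = mask_acq * split, mask_acq * (1.0 - split)
x_hat = reconstruct(y_acq * mask_train, mask_train)
loss_ssl = (fft2c(x_hat) * mask_loss - y_acq * mask_loss).abs().pow(2).mean()

# Cycle-consistency: undersample the network output with a *new* mask from the
# same distribution, reconstruct again, and compare at the truly acquired lines.
mask_sim = sample_mask()
x_cycle = reconstruct(fft2c(x_hat) * mask_sim, mask_sim)
loss_cc = (fft2c(x_cycle) * mask_acq - y_acq).abs().pow(2).mean()

loss = loss_ssl + loss_cc
loss.backward()
```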

Citations: 0
PARKINSON'S DISEASE CLASSIFICATION USING CONTRASTIVE GRAPH CROSS-VIEW LEARNING WITH MULTIMODAL FUSION OF SPECT IMAGES AND CLINICAL FEATURES.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635712
Jun-En Ding, Chien-Chin Hsu, Feng Liu

Parkinson's Disease (PD) affects millions globally, impacting movement. Prior research utilized deep learning for PD prediction, primarily focusing on medical images, neglecting the data's underlying manifold structure. This work proposes a multimodal approach encompassing both image and non-image features, leveraging contrastive cross-view graph fusion for PD classification. We introduce a novel multimodal co-attention module, integrating embeddings from separate graph views derived from low-dimensional representations of images and clinical features. This enables more robust and structured feature extraction for improved multi-view data analysis. Additionally, a simplified contrastive loss-based fusion method is devised to enhance cross-view fusion learning. Our graph-view multimodal approach achieves an accuracy of 91% and an area under the receiver operating characteristic curve (AUC) of 92.8% in five-fold cross-validation. It also demonstrates superior predictive capabilities on non-image data compared to solely machine learning-based methods.
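
The co-attention fusion and simplified contrastive loss described above can be sketched as follows. The graph encoders that produce the two views are omitted, and every dimension, module, and name below is an illustrative assumption, not the authors' architecture.

```python
# Sketch of cross-view co-attention fusion with a simplified contrastive loss.
# The two inputs stand in for node embeddings from an image-derived graph view
# and a clinical-feature graph view; the graph encoders themselves are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionFusion(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.attn_img = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.attn_cli = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)           # PD vs. healthy control

    def forward(self, z_img, z_cli):
        # Each view attends to the other (queries from one, keys/values from the other).
        img_ctx, _ = self.attn_img(z_img, z_cli, z_cli)
        cli_ctx, _ = self.attn_cli(z_cli, z_img, z_img)
        fused = torch.cat([img_ctx.mean(1), cli_ctx.mean(1)], dim=-1)
        return self.classifier(fused), img_ctx.mean(1), cli_ctx.mean(1)

def contrastive_loss(a, b, temperature=0.1):
    """InfoNCE-style loss pulling the two views of the same subject together."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy batch: 8 subjects, each with 16 graph-node embeddings per view.
z_img, z_cli = torch.randn(8, 16, 64), torch.randn(8, 16, 64)
labels = torch.randint(0, 2, (8,))

model = CoAttentionFusion()
logits, h_img, h_cli = model(z_img, z_cli)
loss = F.cross_entropy(logits, labels) + 0.5 * contrastive_loss(h_img, h_cli)
loss.backward()
```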

Citations: 0
PVTAD: ALZHEIMER'S DISEASE DIAGNOSIS USING PYRAMID VISION TRANSFORMER APPLIED TO WHITE MATTER OF T1-WEIGHTED STRUCTURAL MRI DATA.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635541
Maryam Akhavan Aghdam, Serdar Bozdag, Fahad Saeed

Alzheimer's disease (AD) is a neurodegenerative disorder, and timely diagnosis is crucial for early interventions. AD is known to have disruptive local and global brain neural connections that may be instrumental in understanding and extracting specific biomarkers. Existing machine-learning approaches are mostly based on convolutional neural network (CNN) and standard vision transformer (ViT) models, which may not sufficiently capture the multidimensional local and global patterns indicative of AD. Therefore, in this paper, we propose a novel approach called PVTAD to classify AD and cognitively normal (CN) cases using a pretrained pyramid vision transformer (PVT) and the white matter (WM) of T1-weighted structural MRI (sMRI) data. Our approach combines the advantages of CNN and standard ViT to extract both local and global features indicative of AD from the WM coronal middle slices. We performed experiments on subjects with T1-weighted MPRAGE sMRI scans from the ADNI dataset. Our results demonstrate that PVTAD achieves an average accuracy of 97.7% and an F1-score of 97.6%, outperforming the single and parallel CNN and standard ViT based on sMRI data for AD vs. CN classification. Our code is available at https://github.com/pcdslab/PVTAD.
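
A sketch of the classification step is given below, assuming `timm`'s PVTv2 weights as the pretrained pyramid vision transformer and a guessed slice-extraction and preprocessing scheme; the model name, slice axis, and normalization are illustrative assumptions, and the linked repository holds the actual implementation.

```python
# Sketch: classify AD vs. CN from white-matter coronal middle slices with a
# pretrained pyramid vision transformer. The timm model name 'pvt_v2_b0' and
# the preprocessing below are assumptions, not the authors' configuration.
import numpy as np
import torch
import timm

def coronal_middle_slices(wm_volume, n_slices=3):
    """Take n_slices coronal slices around the middle of a 3D WM volume."""
    mid = wm_volume.shape[1] // 2
    idx = range(mid - n_slices // 2, mid + n_slices // 2 + 1)
    return np.stack([wm_volume[:, i, :] for i in idx], axis=0)   # (n_slices, H, W)

# Placeholder WM volume (e.g., segmented from a T1-weighted MPRAGE scan).
wm = np.random.rand(182, 218, 182).astype(np.float32)

slices = coronal_middle_slices(wm)                               # (3, 182, 182)
x = torch.from_numpy(slices).unsqueeze(0)                        # (1, 3, H, W)
x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
x = (x - x.mean()) / (x.std() + 1e-6)                            # simple normalization

model = timm.create_model("pvt_v2_b0", pretrained=True, num_classes=2)
model.eval()
with torch.no_grad():
    logits = model(x)                                            # (1, 2): AD vs. CN
print(logits.softmax(dim=-1))
```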

Citations: 0
MEMORY-EFFICIENT DEEP END-TO-END POSTERIOR NETWORK (DEEPEN) FOR INVERSE PROBLEMS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635179
Jyothi Rikhab Chand, Mathews Jacob

End-to-End (E2E) unrolled optimization frameworks show promise for Magnetic Resonance (MR) image recovery, but suffer from high memory usage during training. In addition, these deterministic approaches do not offer opportunities for sampling from the posterior distribution. In this paper, we introduce a memory-efficient approach for E2E learning of the posterior distribution. We represent this distribution as the combination of a data-consistency-induced likelihood term and an energy model for the prior, parameterized by a Convolutional Neural Network (CNN). The CNN weights are learned from training data in an E2E fashion using maximum likelihood optimization. The learned model enables the recovery of images from undersampled measurements using Maximum A Posteriori (MAP) optimization. In addition, the posterior model can be sampled to derive uncertainty maps about the reconstruction. Experiments on parallel MR image reconstruction show that our approach performs comparably to the memory-intensive E2E unrolled algorithm, outperforms its memory-efficient counterpart, and can provide uncertainty maps. Our framework paves the way towards MR image reconstruction in 3D and higher dimensions.
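
The two uses of a learned posterior described above can be sketched as follows: MAP reconstruction by minimizing a data-consistency term plus a CNN-parameterized prior energy, and Langevin-style sampling for an uncertainty map. The energy network, step sizes, and single-coil forward model below are illustrative, not the trained DEEPEN model.

```python
# Sketch of MAP reconstruction with a CNN-parameterized energy prior, plus
# unadjusted Langevin sampling for a pixelwise uncertainty map. Toy data,
# single-coil Cartesian forward model, untrained energy network.
import torch
import torch.nn as nn

H = W = 64
mask = (torch.rand(H, W) < 0.3).float()
x_true = torch.randn(H, W)
y = torch.fft.fft2(x_true) * mask                        # undersampled measurements

energy_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.SiLU(),
                           nn.Conv2d(16, 1, 3, padding=1))

def energy(x):
    """Scalar prior energy E_theta(x); lower means more plausible."""
    return energy_net(x[None, None]).mean()

def neg_log_posterior(x, lam=0.1):
    data_fit = (torch.fft.fft2(x) * mask - y).abs().pow(2).sum()
    return data_fit + lam * energy(x)

# MAP estimate by gradient descent on the negative log-posterior.
x = torch.fft.ifft2(y).real.clone().requires_grad_(True)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    neg_log_posterior(x).backward()
    opt.step()
x_map = x.detach()

# Langevin sampling around the posterior to derive an uncertainty map.
samples, eps = [], 1e-3
xs = x_map.clone().requires_grad_(True)
for _ in range(100):
    grad = torch.autograd.grad(neg_log_posterior(xs), xs)[0]
    with torch.no_grad():
        xs += -eps * grad + (2 * eps) ** 0.5 * torch.randn_like(xs)
    samples.append(xs.detach().clone())
uncertainty = torch.stack(samples).std(dim=0)
print(x_map.shape, uncertainty.shape)
```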

Citations: 0
RESOLUTION- AND STIMULUS-AGNOSTIC SUPER-RESOLUTION OF ULTRA-HIGH-FIELD FUNCTIONAL MRI: APPLICATION TO VISUAL STUDIES.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635270
Hongwei Bran Li, Matthew S Rosen, Shahin Nasr, Juan Eugenio Iglesias

High-resolution fMRI provides a window into the brain's mesoscale organization. Yet higher spatial resolution requires longer scan times to compensate for the lower signal- and contrast-to-noise ratios. This work introduces a deep learning-based 3D super-resolution (SR) method for fMRI. By incorporating a resolution-agnostic image augmentation framework, our method adapts to varying voxel sizes without retraining. We apply this technique to localize fine-scale motion-selective sites in the early visual areas. Detection of these sites typically requires ≤ 1mm isotropic data, whereas here, we visualize them based on lower resolution (2-3mm isotropic) fMRI data. Remarkably, the super-resolved fMRI is able to recover high-frequency detail of the interdigitated organization of these sites (relative to the color-selective sites), even with training data sourced from different subjects and experimental paradigms, including non-visual resting-state fMRI, underscoring its robustness and versatility. Quantitative and qualitative results indicate that our method has the potential to enhance the spatial resolution of fMRI, leading to a drastic reduction in acquisition time.
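
The resolution-agnostic augmentation idea can be sketched as follows: each high-resolution 3D patch is degraded to a randomly drawn voxel size and resampled back to the target grid before being fed to the super-resolution network. The tiny 3D CNN, the 2-3 mm voxel range, and the training loop below are assumptions for illustration, not the authors' model.

```python
# Sketch of resolution-agnostic super-resolution training for 3D fMRI: each
# high-resolution patch is degraded to a random voxel size and resampled back
# to the target grid, so one network sees many acquisition resolutions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_resolution_degrade(hr, vox_hr=1.0, vox_range=(2.0, 3.0)):
    """Simulate a lower acquisition resolution, then resample to the HR grid."""
    vox_lr = torch.empty(1).uniform_(*vox_range).item()
    scale = vox_hr / vox_lr                                  # < 1: fewer voxels
    lr = F.interpolate(hr, scale_factor=scale, mode="trilinear", align_corners=False)
    return F.interpolate(lr, size=hr.shape[2:], mode="trilinear", align_corners=False)

sr_net = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv3d(16, 1, 3, padding=1))
opt = torch.optim.Adam(sr_net.parameters(), lr=1e-3)

for step in range(10):                                       # toy training loop
    hr_patch = torch.randn(2, 1, 32, 32, 32)                 # stand-in fMRI patches
    lr_patch = random_resolution_degrade(hr_patch)
    pred = sr_net(lr_patch) + lr_patch                       # residual prediction
    loss = F.l1_loss(pred, hr_patch)
    opt.zero_grad(); loss.backward(); opt.step()
    print(step, loss.item())
```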

Citations: 0
MULTISCALE ESTIMATION OF MORPHOMETRICITY FOR REVEALING NEUROANATOMICAL BASIS OF COGNITIVE TRAITS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635581
Zixuan Wen, Jingxuan Bao, Shu Yang, Junhao Wen, Qipeng Zhan, Yuhan Cui, Guray Erus, Zhijian Yang, Paul M Thompson, Yize Zhao, Christos Davatzikos, Li Shen

Morphometricity examines the global statistical association between brain morphology and an observable trait, and is defined as the proportion of the trait variation attributable to brain morphology. In this work, we propose an accurate morphometricity estimator based on the generalized random effects (GRE) model, and perform morphometricity analyses on five cognitive traits in an Alzheimer's study. Our empirical study shows that the proposed GRE model outperforms the widely used LME model on both simulation and real data. In addition, we extend morphometricity estimation from the whole brain to the focal-brain level, and examine and quantify both global and regional neuroanatomical signatures of the cognitive traits. Our global analysis reveals 1) a relatively strong anatomical basis for ADAS13, 2) intermediate ones for MMSE, CDRSB and FAQ, and 3) a relatively weak one for RAVLT.learning. The top associations identified from our regional morphometricity analysis include those between all five cognitive traits and multiple regions such as hippocampus, amygdala, and inferior lateral ventricles. As expected, the identified regional associations are weaker than the global ones. While the whole brain analysis is more powerful in identifying higher morphometricity, the regional analysis could localize the neuroanatomical signatures of the studied cognitive traits and thus provide valuable information in imaging and cognitive biomarker discovery for normal and/or disordered brain research.
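
Morphometricity reduces to a variance-component estimate, m^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2), under a random-effects model whose covariance is an anatomical similarity matrix built from morphometric features. The sketch below implements a generic profiled-likelihood estimator of that quantity on simulated data; it is a standard single-component baseline for illustration, not the paper's GRE estimator.

```python
# Sketch of a morphometricity-style estimate: proportion of trait variance
# attributable to brain morphology under y ~ N(0, m2 * K + (1 - m2) * I),
# with K an anatomical similarity matrix from standardized features.
import numpy as np

rng = np.random.default_rng(0)
n, p, true_m2 = 300, 500, 0.6

# Simulated morphometric features and a trait with known morphometricity.
Z = rng.normal(size=(n, p))
Z = (Z - Z.mean(0)) / Z.std(0)
K = Z @ Z.T / p                                            # anatomical similarity
g = Z @ rng.normal(scale=np.sqrt(true_m2 / p), size=p)     # morphology component
y = g + rng.normal(scale=np.sqrt(1 - true_m2), size=n)
y = (y - y.mean()) / y.std()

# Profile the log-likelihood over m2 on a grid, using the eigendecomposition
# of K so each evaluation is O(n).
evals, evecs = np.linalg.eigh(K)
yt = evecs.T @ y

def loglik(m2):
    var = m2 * evals + (1.0 - m2)                          # eigenvalues of the covariance
    return -0.5 * (np.sum(np.log(var)) + np.sum(yt**2 / var))

grid = np.linspace(0.01, 0.99, 99)
m2_hat = grid[np.argmax([loglik(m) for m in grid])]
print(f"estimated morphometricity: {m2_hat:.2f} (true {true_m2})")
```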

Citations: 0
ROBUST OUTER VOLUME SUBTRACTION WITH DEEP LEARNING GHOSTING DETECTION FOR HIGHLY-ACCELERATED REAL-TIME DYNAMIC MRI. 基于深度学习重影检测的高加速实时动态mri鲁棒外体积减法。
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635530
Merve Gülle, Mehmet Akçakaya

Real-time dynamic MRI is important for visualizing time-varying processes in several applications, including cardiac imaging, where it enables free-breathing images of the beating heart without ECG gating. However, current real-time MRI techniques commonly face challenges in achieving the required spatio-temporal resolutions due to limited acceleration rates. In this study, we propose a deep learning (DL) technique for improving the estimation of stationary outer-volume signal from shifted time-interleaved undersampling patterns. Our approach utilizes the pseudo-periodic nature of the ghosting artifacts arising from the moving organs. Subsequently, this estimated outer-volume signal is subtracted from individual timeframes of the real-time MR time series, and each timeframe is reconstructed individually using physics-driven DL methods. Results show improved image quality at high acceleration rates, where conventional methods fail.
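
One way to picture the outer-volume subtraction step is sketched below: a temporal composite of shifted time-interleaved data estimates the stationary signal, a mask excludes the moving region (a fixed box here, standing in for the paper's learned ghosting detection), and the result is subtracted from each frame's acquired k-space. Single-coil Cartesian toy data; none of this is the authors' implementation.

```python
# Sketch of stationary outer-volume subtraction for time-interleaved
# undersampled dynamic MRI on a toy phantom.
import math
import torch

T, H, W = 16, 64, 64
R = 8                                                      # acceleration rate

# Toy dynamic object: static background plus a small moving block ("heart").
frames = torch.zeros(T, H, W, dtype=torch.complex64)
frames += 1.0                                              # static outer volume
for t in range(T):
    r = 28 + int(4 * math.sin(2 * math.pi * t / T))
    frames[t, r:r + 8, 28:36] += 2.0

# Shifted time-interleaved undersampling: frame t acquires every R-th line,
# offset by t mod R, so the union over R frames covers all of k-space.
masks = torch.zeros(T, H, W)
for t in range(T):
    masks[t, (t % R)::R, :] = 1.0
kspace = torch.fft.fft2(frames) * masks

# Composite: average the acquired samples at each k-space location over time.
composite_k = kspace.sum(0) / masks.sum(0).clamp(min=1)
composite_img = torch.fft.ifft2(composite_k)

# Keep only the stationary outer volume (placeholder box instead of the
# learned ghosting/motion detection) and subtract it from every frame.
outer_mask = torch.ones(H, W)
outer_mask[20:48, 20:48] = 0.0                             # exclude moving region
outer_k = torch.fft.fft2(composite_img * outer_mask)
kspace_sub = kspace - outer_k[None] * masks                # subtract at acquired lines

print(kspace_sub.shape)                                    # mostly dynamic signal remains
```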

Citations: 0
DEEP IMAGE PRIOR WITH STRUCTURED SPARSITY (DISCUS) FOR DYNAMIC MRI RECONSTRUCTION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635579
Muhammad Ahmad Sultan, Chong Chen, Yingmin Liu, Xuan Lei, Rizwan Ahmad

High-quality training data are not always available in dynamic MRI. To address this, we propose a self-supervised deep learning method called deep image prior with structured sparsity (DISCUS) for reconstructing dynamic images. DISCUS is inspired by deep image prior (DIP) and recovers a series of images through joint optimization of network parameters and input code vectors. However, DISCUS additionally encourages group sparsity on frame-specific code vectors to discover the low-dimensional manifold that describes temporal variations across frames. Compared to prior work on manifold learning, DISCUS does not require specifying the manifold dimensionality. We validate DISCUS using three numerical studies. In the first study, we simulate a dynamic Shepp-Logan phantom with frames undergoing random rotations, translations, or both, and demonstrate that DISCUS can discover the dimensionality of the underlying manifold. In the second study, we use data from a realistic late gadolinium enhancement (LGE) phantom to compare DISCUS with compressed sensing (CS) and DIP, and to demonstrate the positive impact of group sparsity. In the third study, we use retrospectively undersampled single-shot LGE data from five patients to compare DISCUS with CS reconstructions. The results from these studies demonstrate that DISCUS outperforms CS and DIP, and that enforcing group sparsity on the code vectors helps discover true manifold dimensionality and provides additional performance gain.
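
A compact sketch of the objective described above: a single untrained generator maps frame-specific code vectors to images, the loss combines per-frame data consistency with a group-sparsity (L2,1) penalty over code dimensions, and both the network weights and the codes are optimized jointly. The tiny generator, single-coil forward model, and penalty weight are illustrative assumptions.

```python
# Sketch of deep image prior with group sparsity on frame-specific codes:
# each code dimension forms one group across frames, so the penalty drives
# most dimensions to zero and reveals a low-dimensional temporal manifold.
import torch
import torch.nn as nn

T, H, W, D = 8, 32, 32, 16                                 # frames, size, code length

masks = (torch.rand(T, H, W) < 0.25).float()
x_true = torch.randn(T, H, W, dtype=torch.complex64)       # placeholder dynamic series
y = torch.fft.fft2(x_true) * masks                         # undersampled k-space

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(D, 8 * 8 * 8)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear"),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))                 # real/imag channels

    def forward(self, z):                                   # z: (T, D)
        h = self.fc(z).view(-1, 8, 8, 8)
        out = self.net(h)                                   # (T, 2, 32, 32)
        return torch.complex(out[:, 0], out[:, 1])

gen = Generator()
z_frames = nn.Parameter(0.01 * torch.randn(T, D))           # frame-specific codes
opt = torch.optim.Adam(list(gen.parameters()) + [z_frames], lr=1e-3)
lam = 0.05

for it in range(200):
    opt.zero_grad()
    x_hat = gen(z_frames)                                    # (T, H, W) complex
    data_fit = (torch.fft.fft2(x_hat) * masks - y).abs().pow(2).mean()
    # Group sparsity: each code dimension is one group across all frames.
    group_sparsity = z_frames.norm(dim=0).sum()              # sum_d ||z[:, d]||_2
    loss = data_fit + lam * group_sparsity
    loss.backward()
    opt.step()
```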

Citations: 0
SURFACE COIL INTENSITY CORRECTION FOR MRI.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635382
Xuan Lei, Philip Schniter, Chong Chen, Muhammad Ahmad Sultan, Rizwan Ahmad

Modern MRI scanners utilize one or more arrays of small receive-only coils to collect k-space data. The sensitivity maps of the coils, when estimated using traditional methods, differ from the true sensitivity maps, which are generally unknown. Consequently, the reconstructed MR images exhibit undesired spatial variation in intensity. These intensity variations can be at least partially corrected using pre-scan data. In this work, we propose an intensity correction method that utilizes pre-scan data. For demonstration, we apply our method to a digital phantom, as well as to cardiac MRI data collected from a commercial scanner by Siemens Healthineers. The code is available at https://github.com/OSU-MR/SCC.
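
A sketch of one common pre-scan-based correction strategy consistent with the description above: estimate a smooth intensity (bias) field from the ratio of a surface-coil-array pre-scan to a uniform body-coil pre-scan, then divide the image by it. The phantom, smoothing width, and regularization below are illustrative choices, not necessarily the authors' SCC algorithm (see the linked repository).

```python
# Sketch of pre-scan-based surface coil intensity correction on a digital
# phantom: the bias field is a smoothed, regularized ratio of the two pre-scans.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H = W = 128

# Digital phantom and a smooth, spatially varying surface-coil shading.
yy, xx = np.mgrid[0:H, 0:W] / H
phantom = ((xx - 0.5) ** 2 + (yy - 0.5) ** 2 < 0.16).astype(float)
shading = 0.4 + 1.5 * np.exp(-((xx - 0.9) ** 2 + (yy - 0.5) ** 2) / 0.15)

# Pre-scans: body coil (uniform sensitivity) and surface-coil array (shaded).
prescan_body = phantom + 0.01 * rng.normal(size=(H, W))
prescan_surf = shading * phantom + 0.01 * rng.normal(size=(H, W))

# Smooth bias field from the pre-scan ratio, regularized to avoid division
# by near-zero background values.
eps = 0.05 * np.abs(prescan_body).max()
bias = gaussian_filter(prescan_surf * prescan_body, sigma=8) / \
       (gaussian_filter(prescan_body ** 2, sigma=8) + eps ** 2)

# Apply the correction to the (shaded) diagnostic image and compare the
# residual intensity variation inside the phantom before and after.
image = shading * phantom
corrected = image / np.maximum(bias, 0.1)
print(float(image[phantom > 0].std()), float(corrected[phantom > 0].std()))
```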

Citations: 0