
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention - Latest Publications

Modularity-Constrained Dynamic Representation Learning for Interpretable Brain Disorder Analysis with Functional MRI.
Qianqian Wang, Mengqi Wu, Yuqi Fang, Wei Wang, Lishan Qiao, Mingxia Liu

Resting-state functional MRI (rs-fMRI) is increasingly used to detect altered functional connectivity patterns caused by brain disorders, thereby facilitating objective quantification of brain pathology. Existing studies typically extract fMRI features using various machine/deep learning methods, but the generated imaging biomarkers are often challenging to interpret. Moreover, the brain operates as a modular system with many cognitive/topological modules, where each module contains a subset of densely inter-connected regions-of-interest (ROIs) that are sparsely connected to ROIs in other modules; current methods, however, cannot effectively characterize this modularity. This paper proposes a modularity-constrained dynamic representation learning (MDRL) framework for interpretable brain disorder analysis with rs-fMRI. The MDRL consists of three parts: (1) dynamic graph construction, (2) a modularity-constrained spatiotemporal graph neural network (MSGNN) for dynamic feature learning, and (3) prediction and biomarker detection. In particular, the MSGNN is designed to learn spatiotemporal dynamic representations of fMRI, constrained by three functional modules (i.e., the central executive network, salience network, and default mode network). To enhance the discriminative ability of the learned features, we encourage the MSGNN to reconstruct the network topology of input graphs. Experimental results on two public datasets and one private dataset, with a total of 1,155 subjects, validate that our MDRL outperforms several state-of-the-art methods in fMRI-based brain disorder analysis. The detected fMRI biomarkers have good explainability and can potentially be used to improve clinical diagnosis.
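
A rough intuition for the modularity constraint can be given in code. The sketch below shows one way such a constraint could be imposed as a loss on learned ROI embeddings, assuming a fixed assignment of ROIs to modules; the abstract does not specify the MSGNN's actual formulation, so the loss form, ROI count, and module labels here are all illustrative.

```python
import torch
import torch.nn.functional as F

def modularity_loss(roi_embed: torch.Tensor, module_ids: torch.Tensor) -> torch.Tensor:
    """Pull together embeddings of ROIs in the same functional module
    (e.g., CEN / SN / DMN) and push apart embeddings across modules.
    roi_embed: (num_rois, dim); module_ids: (num_rois,) integer labels."""
    # pairwise cosine similarity between all ROI embeddings, shape (R, R)
    sim = F.cosine_similarity(roi_embed.unsqueeze(1), roi_embed.unsqueeze(0), dim=-1)
    same = (module_ids.unsqueeze(1) == module_ids.unsqueeze(0)).float()
    # within-module pairs should be similar, between-module pairs dissimilar
    return ((1.0 - sim) * same + sim * (1.0 - same)).mean()

roi_embed = torch.randn(90, 64, requires_grad=True)  # e.g., 90 atlas ROIs
module_ids = torch.randint(0, 4, (90,))              # 3 named modules + "other"
modularity_loss(roi_embed, module_ids).backward()
```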

{"title":"Modularity-Constrained Dynamic Representation Learning for Interpretable Brain Disorder Analysis with Functional MRI.","authors":"Qianqian Wang, Mengqi Wu, Yuqi Fang, Wei Wang, Lishan Qiao, Mingxia Liu","doi":"10.1007/978-3-031-43907-0_5","DOIUrl":"10.1007/978-3-031-43907-0_5","url":null,"abstract":"<p><p>Resting-state functional MRI (rs-fMRI) is increasingly used to detect altered functional connectivity patterns caused by brain disorders, thereby facilitating objective quantification of brain pathology. Existing studies typically extract fMRI features using various machine/deep learning methods, but the generated imaging biomarkers are often challenging to interpret. Besides, the brain operates as a modular system with many cognitive/topological modules, where each module contains subsets of densely inter-connected regions-of-interest (ROIs) that are sparsely connected to ROIs in other modules. However, current methods cannot effectively characterize brain modularity. This paper proposes a modularity-constrained dynamic representation learning (MDRL) framework for interpretable brain disorder analysis with rs-fMRI. The MDRL consists of 3 parts: (1) dynamic graph construction, (2) modularity-constrained spatiotemporal graph neural network (MSGNN) for dynamic feature learning, and (3) prediction and biomarker detection. In particular, the MSGNN is designed to learn spatiotemporal dynamic representations of fMRI, constrained by 3 functional modules (<i>i.e.</i>, central executive network, salience network, and default mode network). To enhance discriminative ability of learned features, we encourage the MSGNN to reconstruct network topology of input graphs. Experimental results on two public and one private datasets with a total of 1,155 subjects validate that our MDRL outperforms several state-of-the-art methods in fMRI-based brain disorder analysis. The detected fMRI biomarkers have good explainability and can be potentially used to improve clinical diagnosis.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14220 ","pages":"46-56"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883232/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139935019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Microstructure Fingerprinting for Heterogeneously Oriented Tissue Microenvironments.
Khoi Minh Huynh, Ye Wu, Sahar Ahmad, Pew-Thian Yap

Most diffusion biophysical models capture basic properties of tissue microstructure, such as diffusivity and anisotropy. More realistic models that relate the diffusion-weighted signal to cell size and membrane permeability often require simplifying assumptions, such as a short gradient pulse and a Gaussian phase distribution, leading to tissue features that are not necessarily quantitative. Here, we propose a method to quantify tissue microstructure without the loss of accuracy that such unrealistic assumptions incur. Our method utilizes realistic signals simulated from the geometries of cellular microenvironments as fingerprints, which are then employed in a spherical mean estimation framework to disentangle the effects of orientation dispersion from microscopic tissue properties. We demonstrate the efficacy of microstructure fingerprinting in estimating intra-cellular, extra-cellular, and intra-soma volume fractions as well as axon radius, soma radius, and membrane permeability.
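
To make the fingerprinting idea concrete, here is a toy sketch of matching a measured signal against a dictionary of simulated fingerprints under a non-negativity constraint on the volume fractions. The random dictionary stands in for the realistic simulated signals the paper uses, and the spherical-mean machinery is omitted; all sizes are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_meas, n_atoms = 64, 20
# stand-in for a dictionary of simulated fingerprint signals (one column each)
D = np.abs(rng.standard_normal((n_meas, n_atoms)))
true_frac = np.zeros(n_atoms)
true_frac[[2, 7, 11]] = [0.5, 0.3, 0.2]              # three active compartments
signal = D @ true_frac + 0.01 * rng.standard_normal(n_meas)

frac, residual = nnls(D, signal)                     # non-negative volume fractions
frac /= frac.sum()                                   # normalize to sum to one
print(frac.round(2), residual)
```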

{"title":"Microstructure Fingerprinting for Heterogeneously Oriented Tissue Microenvironments.","authors":"Khoi Minh Huynh, Ye Wu, Sahar Ahmad, Pew-Thian Yap","doi":"10.1007/978-3-031-43993-3_13","DOIUrl":"10.1007/978-3-031-43993-3_13","url":null,"abstract":"<p><p>Most diffusion biophysical models capture basic properties of tissue microstructure, such as diffusivity and anisotropy. More realistic models that relate the diffusion-weighted signal to cell size and membrane permeability often require simplifying assumptions such as short gradient pulse and Gaussian phase distribution, leading to tissue features that are not necessarily quantitative. Here, we propose a method to quantify tissue microstructure without jeopardizing accuracy owing to unrealistic assumptions. Our method utilizes realistic signals simulated from the geometries of cellular microenvironments as fingerprints, which are then employed in a spherical mean estimation framework to disentangle the effects of orientation dispersion from microscopic tissue properties. We demonstrate the efficacy of microstructure fingerprinting in estimating intra-cellular, extra-cellular, and intra-soma volume fractions as well as axon radius, soma radius, and membrane permeability.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14227 ","pages":"131-141"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315459/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141918477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline.
Minhui Yu, Yunbi Liu, Jinjian Wu, Andrea Bozoki, Shijun Qiu, Ling Yue, Mingxia Liu

Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine and deep learning approaches. However, successfully fusing MRI and PET is complicated by their distinct characteristics and by missing modalities. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and forecasting of cognitive decline progression. Our HMF consists of three modules: 1) a module to impute missing PET images, 2) a module to extract multimodality features from MRI and PET images, and 3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset, which includes 795 subjects, to independent small-scale AD-related cohorts, in order to leverage the rich knowledge present within ADNI. The proposed HMF is extensively evaluated in three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.
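
A minimal sketch of the imputation-plus-fusion idea is shown below, assuming precomputed backbone features for each modality; the module shapes and names are placeholders, not the authors' HMF implementation.

```python
import torch
import torch.nn as nn

class SimpleHMF(nn.Module):
    """Impute a missing PET feature from MRI, then fuse both for prediction."""
    def __init__(self, dim=128, n_classes=2):
        super().__init__()
        self.impute = nn.Linear(dim, dim)     # MRI feature -> pseudo-PET feature
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, n_classes))

    def forward(self, mri_feat, pet_feat=None):
        if pet_feat is None:                  # PET missing: impute from MRI
            pet_feat = self.impute(mri_feat)
        return self.fuse(torch.cat([mri_feat, pet_feat], dim=-1))

model = SimpleHMF()
logits = model(torch.randn(4, 128))           # a batch with PET missing
```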

{"title":"Hybrid Multimodality Fusion with Cross-Domain Knowledge Transfer to Forecast Progression Trajectories in Cognitive Decline.","authors":"Minhui Yu, Yunbi Liu, Jinjian Wu, Andrea Bozoki, Shijun Qiu, Ling Yue, Mingxia Liu","doi":"10.1007/978-3-031-47425-5_24","DOIUrl":"10.1007/978-3-031-47425-5_24","url":null,"abstract":"<p><p>Magnetic resonance imaging (MRI) and positron emission tomography (PET) are increasingly used to forecast progression trajectories of cognitive decline caused by preclinical and prodromal Alzheimer's disease (AD). Many existing studies have explored the potential of these two distinct modalities with diverse machine and deep learning approaches. But successfully fusing MRI and PET can be complex due to their unique characteristics and missing modalities. To this end, we develop a hybrid multimodality fusion (HMF) framework with cross-domain knowledge transfer for joint MRI and PET representation learning, feature fusion, and cognitive decline progression forecasting. Our HMF consists of three modules: 1) a module to impute missing PET images, 2) a module to extract multimodality features from MRI and PET images, and 3) a module to fuse the extracted multimodality features. To address the issue of small sample sizes, we employ a cross-domain knowledge transfer strategy from the ADNI dataset, which includes 795 subjects, to independent small-scale AD-related cohorts, in order to leverage the rich knowledge present within the ADNI. The proposed HMF is extensively evaluated in three AD-related studies with 272 subjects across multiple disease stages, such as subjective cognitive decline and mild cognitive impairment. Experimental results demonstrate the superiority of our method over several state-of-the-art approaches in forecasting progression trajectories of AD-related cognitive decline.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14394 ","pages":"265-275"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10904401/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140023897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network.
Bryar Shareef, Min Xian, Aleksandar Vakanski, Haotian Wang

Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations for modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations. In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNNs and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.
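
The overall hybrid multitask layout can be sketched as follows: a small CNN stem feeds a transformer encoder, whose tokens drive two heads for classification and segmentation. This sketch uses a vanilla transformer encoder rather than the Swin Transformer components of Hybrid-MT-ESTAN, and the segmentation head is left at reduced resolution for brevity.

```python
import torch
import torch.nn as nn

class HybridMultitask(nn.Module):
    def __init__(self, dim=64, n_classes=2):
        super().__init__()
        # CNN stem captures local patterns and downsamples by 4x
        self.cnn = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.cls_head = nn.Linear(dim, n_classes)
        self.seg_head = nn.Conv2d(dim, 1, 1)   # coarse mask; upsampling omitted

    def forward(self, x):
        feat = self.cnn(x)                     # (B, C, H/4, W/4)
        b, c, h, w = feat.shape
        tokens = self.transformer(feat.flatten(2).transpose(1, 2))  # (B, HW, C)
        logits = self.cls_head(tokens.mean(dim=1))  # pooled classification
        seg = self.seg_head(tokens.transpose(1, 2).reshape(b, c, h, w))
        return logits, seg

model = HybridMultitask()
cls, seg = model(torch.randn(2, 1, 128, 128))
```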

{"title":"Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network.","authors":"Bryar Shareef, Min Xian, Aleksandar Vakanski, Haotian Wang","doi":"10.1007/978-3-031-43901-8_33","DOIUrl":"https://doi.org/10.1007/978-3-031-43901-8_33","url":null,"abstract":"<p><p>Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations for modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations. In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNNs and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14223 ","pages":"344-353"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11006090/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140871832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flow-based Geometric Interpolation of Fiber Orientation Distribution Functions.
Xinyu Nie, Yonggang Shi

The fiber orientation distribution function (FOD) is an advanced model for high angular resolution diffusion MRI that represents complex fiber geometry. However, the complicated mathematical structure of the FOD poses challenges for FOD image processing tasks such as interpolation, which plays a critical role in the propagation of fiber tracts in tractography. In FOD-based tractography, linear interpolation is commonly used for numerical efficiency, but it is prone to generating false artificial information, leading to anatomically incorrect fiber tracts. To overcome this difficulty, we propose a flow-based and geometrically consistent interpolation framework that considers peak-wise rotations of FODs within the neighborhood of each location. Our method decomposes an FOD function into multiple components and uses a smooth vector field to model the flows of each peak in its neighborhood. To generate the interpolated result along the flow of each vector field, we develop a closed-form and efficient method to rotate FOD peaks in neighboring voxels and realize geometrically consistent interpolation of FOD components. By combining the interpolation results from each peak, we obtain the final interpolation of FODs. Experimental results on Human Connectome Project (HCP) data demonstrate that our method produces anatomically more meaningful FOD interpolations and significantly enhances tractography performance.
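
The core geometric point, that rotating a peak preserves its magnitude while linear averaging does not, can be seen with two unit peak directions. The toy comparison below works on plain vectors, not on the paper's closed-form spherical-harmonic FOD rotation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

p0 = np.array([1.0, 0.0, 0.0])   # peak direction in voxel A
p1 = np.array([0.0, 1.0, 0.0])   # peak direction in voxel B

# linear interpolation of unit peaks shrinks the peak magnitude
lin = 0.5 * (p0 + p1)
print(np.linalg.norm(lin))       # ~0.707: sharpness is lost

# geodesic interpolation: rotate p0 halfway toward p1 about their common axis
axis = np.cross(p0, p1)
axis /= np.linalg.norm(axis)
angle = np.arccos(np.clip(p0 @ p1, -1.0, 1.0))
geo = Rotation.from_rotvec(0.5 * angle * axis).apply(p0)
print(np.linalg.norm(geo))       # 1.0: peak magnitude preserved
```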

{"title":"Flow-based Geometric Interpolation of Fiber Orientation Distribution Functions.","authors":"Xinyu Nie, Yonggang Shi","doi":"10.1007/978-3-031-43993-3_5","DOIUrl":"10.1007/978-3-031-43993-3_5","url":null,"abstract":"<p><p>The fiber orientation distribution function (FOD) is an advanced model for high angular resolution diffusion MRI representing complex fiber geometry. However, the complicated mathematical structures of the FOD function pose challenges for FOD image processing tasks such as interpolation, which plays a critical role in the propagation of fiber tracts in tractography. In FOD-based tractography, linear interpolation is commonly used for numerical efficiency, but it is prone to generate false artificial information, leading to anatomically incorrect fiber tracts. To overcome this difficulty, we propose a flowbased and geometrically consistent interpolation framework that considers peak-wise rotations of FODs within the neighborhood of each location. Our method decomposes a FOD function into multiple components and uses a smooth vector field to model the flows of each peak in its neighborhood. To generate the interpolated result along the flow of each vector field, we develop a closed-form and efficient method to rotate FOD peaks in neighboring voxels and realize geometrically consistent interpolation of FOD components. By combining the interpolation results from each peak, we obtain the final interpolation of FODs. Experimental results on Human Connectome Project (HCP) data demonstrate that our method produces anatomically more meaningful FOD interpolations and significantly enhances tractography performance.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14227 ","pages":"46-55"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10978007/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140320351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Motion Compensated Unsupervised Deep Learning for 5D MRI.
Joseph Kettelkamp, Ludovica Romanin, Davide Piccini, Sarv Priya, Mathews Jacob

We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, current reconstruction algorithms for 5D MRI require very long computation times, and their outcome depends heavily on how uniformly the acquired data are binned into the different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of a deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.
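
A schematic of the template-plus-deformation model is sketched below in 2D: a learnable template is warped by a phase-conditioned deformation field. The paper operates on 3D templates with Fourier (k-space) data consistency and a convolutional deformation network, all of which are omitted here; the sizes, the dense deformation regressor, and the flow scaling are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpedTemplate(nn.Module):
    def __init__(self, size=64):
        super().__init__()
        self.template = nn.Parameter(torch.zeros(1, 1, size, size))  # learned image
        # tiny regressor from (cardiac, respiratory) phase to a dense flow field
        self.deform = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                                    nn.Linear(128, 2 * size * size))
        self.size = size

    def forward(self, phase):                          # phase: (B, 2)
        b = phase.shape[0]
        flow = self.deform(phase).view(b, self.size, self.size, 2)
        base = F.affine_grid(torch.eye(2, 3).unsqueeze(0).expand(b, -1, -1),
                             (b, 1, self.size, self.size), align_corners=False)
        # warp the shared template to each motion state
        return F.grid_sample(self.template.expand(b, -1, -1, -1),
                             base + 0.1 * torch.tanh(flow), align_corners=False)

model = WarpedTemplate()
frames = model(torch.rand(8, 2))                       # 8 cardiac/respiratory states
```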

{"title":"Motion Compensated Unsupervised Deep Learning for 5D MRI.","authors":"Joseph Kettelkamp, Ludovica Romanin, Davide Piccini, Sarv Priya, Mathews Jacob","doi":"10.1007/978-3-031-43999-5_40","DOIUrl":"10.1007/978-3-031-43999-5_40","url":null,"abstract":"<p><p>We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies the scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, the current reconstruction algorithms for 5D MRI take very long computational time, and their outcome is greatly dependent on the uniformity of the binning of the acquired data into different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of the deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14229 ","pages":"419-427"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11087022/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140913632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Longitudinal Multimodal Transformer Integrating Imaging and Latent Clinical Signatures From Routine EHRs for Pulmonary Nodule Classification.
Thomas Z Li, John M Still, Kaiwen Xu, Ho Hin Lee, Leon Y Cai, Aravind R Krishnan, Riqiang Gao, Mirza S Khan, Sanja Antic, Michael Kammer, Kim L Sandler, Fabien Maldonado, Bennett A Landman, Thomas A Lasko

The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which poses obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and on 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from the EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates the significant advantages of a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code is available at https://github.com/MASILab/lmsignatures.
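
One plausible reading of time-distance scaled self-attention is to decay attention logits by the time gap between visits, as in the sketch below; the subtractive decay form and the time constant tau are assumptions for illustration, not the authors' exact formulation (see their repository for the real implementation).

```python
import torch
import torch.nn.functional as F

def time_distance_attention(q, k, v, times, tau=180.0):
    """q, k, v: (B, T, D); times: (B, T) acquisition times in days."""
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5        # (B, T, T) dot-product scores
    gap = (times.unsqueeze(-1) - times.unsqueeze(-2)).abs()
    logits = logits - gap / tau                        # distant visits attend less
    return F.softmax(logits, dim=-1) @ v

q = k = v = torch.randn(2, 5, 32)                      # 5 visits per subject
times = torch.sort(torch.rand(2, 5) * 720).values      # visits within two years
out = time_distance_attention(q, k, v, times)
```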

{"title":"Longitudinal Multimodal Transformer Integrating Imaging and Latent Clinical Signatures From Routine EHRs for Pulmonary Nodule Classification.","authors":"Thomas Z Li, John M Still, Kaiwen Xu, Ho Hin Lee, Leon Y Cai, Aravind R Krishnan, Riqiang Gao, Mirza S Khan, Sanja Antic, Michael Kammer, Kim L Sandler, Fabien Maldonado, Bennett A Landman, Thomas A Lasko","doi":"10.1007/978-3-031-43895-0_61","DOIUrl":"10.1007/978-3-031-43895-0_61","url":null,"abstract":"<p><p>The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales which are obstacles to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signatures expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs 0.752 AUC), as well as improvements over a single cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates significant advantages with a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14221 ","pages":"649-659"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11110542/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141081448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Brain Anatomy-Guided MRI Analysis for Assessing Clinical Progression of Cognitive Impairment with Structural MRI.
Lintao Zhang, Jinjian Wu, Lihong Wang, Li Wang, David C Steffens, Shijun Qiu, Guy G Potter, Mingxia Liu

Brain structural MRI has been widely used with learning-based methods for assessing the future progression of cognitive impairment (CI). Previous studies generally suffer from limited labeled training data, even though a huge number of MRIs exist in large-scale public databases. Even without task-specific label information, the brain anatomical structures provided by these MRIs can intuitively be used to boost learning performance. Unfortunately, existing research seldom takes advantage of such brain anatomy priors. To this end, this paper proposes a brain anatomy-guided representation (BAR) learning framework for assessing the clinical progression of cognitive impairment with T1-weighted MRIs. The BAR consists of a pretext model and a downstream model, with a shared brain anatomy-guided encoder for MRI feature extraction. The pretext model also contains a decoder for brain tissue segmentation, while the downstream model relies on a predictor for classification. We first train the pretext model through a brain tissue segmentation task on 9,544 auxiliary T1-weighted MRIs, yielding a generalizable encoder. The downstream model with the learned encoder is further fine-tuned on target MRIs for prediction tasks. We validate the proposed BAR on two CI-related studies with a total of 391 subjects with T1-weighted MRIs. Experimental results suggest that the BAR outperforms several state-of-the-art (SOTA) methods. The source code and pre-trained models are available at https://github.com/goodaycoder/BAR.
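
The shared-encoder design reads directly as code: pretrain the encoder with a tissue-segmentation decoder, then attach a classifier for the downstream task. The 2D toy shapes below are for brevity only; BAR itself operates on 3D T1-weighted volumes, and the layer choices here are illustrative.

```python
import torch
import torch.nn as nn

# shared encoder used by both the pretext and downstream models
encoder = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
# pretext head: decode back to a tissue map (e.g., 4 tissue classes)
seg_decoder = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                            nn.ConvTranspose2d(32, 4, 2, stride=2))
# downstream head: progression classification on pooled encoder features
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))

x = torch.randn(2, 1, 96, 96)
tissue_logits = seg_decoder(encoder(x))   # pretext: brain tissue segmentation
cls_logits = classifier(encoder(x))       # downstream: CI progression prediction
```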

{"title":"Brain Anatomy-Guided MRI Analysis for Assessing Clinical Progression of Cognitive Impairment with Structural MRI.","authors":"Lintao Zhang, Jinjian Wu, Lihong Wang, Li Wang, David C Steffens, Shijun Qiu, Guy G Potter, Mingxia Liu","doi":"10.1007/978-3-031-43993-3_11","DOIUrl":"10.1007/978-3-031-43993-3_11","url":null,"abstract":"<p><p>Brain structural MRI has been widely used for assessing future progression of cognitive impairment (CI) based on learning-based methods. Previous studies generally suffer from the limited number of labeled training data, while there exists a huge amount of MRIs in large-scale public databases. Even without task-specific label information, brain anatomical structures provided by these MRIs can be used to boost learning performance intuitively. Unfortunately, existing research seldom takes advantage of such brain anatomy prior. To this end, this paper proposes a brain anatomy-guided representation (BAR) learning framework for assessing the clinical progression of cognitive impairment with T1-weighted MRIs. The BAR consists of a <i>pretext model</i> and a <i>downstream model</i>, with a shared brain anatomy-guided encoder for MRI feature extraction. The pretext model also contains a decoder for brain tissue segmentation, while the downstream model relies on a predictor for classification. We first train the pretext model through a brain tissue segmentation task on 9,544 auxiliary T1-weighted MRIs, yielding a generalizable encoder. The downstream model with the learned encoder is further fine-tuned on target MRIs for prediction tasks. We validate the proposed BAR on two CI-related studies with a total of 391 subjects with T1-weighted MRIs. Experimental results suggest that the BAR outperforms several state-of-the-art (SOTA) methods. The source code and pre-trained models are available at https://github.com/goodaycoder/BAR.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14227 ","pages":"109-119"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10883230/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139935020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foundation Ark: Accruing and Reusing Knowledge for Superior and Robust Performance.
DongAo Ma, Jiaxuan Pang, Michael B Gotway, Jianming Liang

Deep learning nowadays offers expert-level and sometimes even super-expert-level performance, but achieving such performance demands massive annotated data for training (e.g., Google's proprietary CXR Foundation Model (CXR-FM) was trained on 821,544 labeled and mostly private chest X-rays (CXRs)). Numerous datasets are publicly available in medical imaging, but they are individually small and heterogeneous in their expert labels. We envision a powerful and robust foundation model that can be trained by aggregating numerous small public datasets. To realize this vision, we have developed Ark, a framework that accrues and reuses knowledge from heterogeneous expert annotations in various datasets. As a proof of concept, we have trained two Ark models on 335,484 and 704,363 CXRs, respectively, by merging several datasets including ChestX-ray14, CheXpert, MIMIC-II, and VinDr-CXR. We evaluated them on a wide range of imaging tasks covering both classification and segmentation via fine-tuning, linear-probing, and gender-bias analysis, and demonstrated Ark's superior and robust performance over state-of-the-art (SOTA) fully/self-supervised baselines and Google's proprietary CXR-FM. This enhanced performance is attributed to our simple yet powerful observation that aggregating numerous public datasets diversifies patient populations and accrues knowledge from diverse experts, yielding unprecedented performance while saving annotation cost. With all code and pretrained models released at GitHub.com/JLiangLab/Ark, we hope that Ark exerts an important impact on open science: accruing and reusing knowledge from expert annotations in public datasets can potentially surpass the performance of proprietary models trained on unusually large data, inspiring many more researchers worldwide to share code and datasets to build open foundation models, accelerate open science, and democratize deep learning for medical imaging.
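
One simple way to accrue knowledge across datasets with heterogeneous label sets is a shared backbone with one output head per dataset, trained on whichever head matches a batch's source, as sketched below. The backbone, head sizes, and dataset names are illustrative; Ark's full training recipe (its pretraining objectives, cyclic training, etc.) is not reproduced here, and the released code should be consulted for the real design.

```python
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    """Shared backbone with one classification head per source dataset."""
    def __init__(self, head_sizes, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, dim, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.heads = nn.ModuleDict({name: nn.Linear(dim, n)
                                    for name, n in head_sizes.items()})

    def forward(self, x, source):
        # route each batch through the head of its originating dataset
        return self.heads[source](self.backbone(x))

model = MultiHeadModel({"chestxray14": 14, "chexpert": 13})  # illustrative counts
logits = model(torch.randn(2, 1, 224, 224), source="chexpert")  # (2, 13)
```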

{"title":"Foundation Ark: Accruing and Reusing Knowledge for Superior and Robust Performance.","authors":"DongAo Ma, Jiaxuan Pang, Michael B Gotway, Jianming Liang","doi":"10.1007/978-3-031-43907-0_62","DOIUrl":"10.1007/978-3-031-43907-0_62","url":null,"abstract":"<p><p>Deep learning nowadays offers expert-level and sometimes even super-expert-level performance, but achieving such performance demands massive annotated data for training (e.g., Google's <i>proprietary</i> CXR Foundation Model (CXR-FM) was trained on 821,544 <i>labeled</i> and mostly <i>private</i> chest X-rays (CXRs)). <i>Numerous</i> datasets are <i>publicly</i> available in medical imaging but individually <i>small</i> and <i>heterogeneous</i> in expert labels. We envision a powerful and robust foundation model that can be trained by aggregating numerous small public datasets. To realize this vision, we have developed <b>Ark</b>, a framework that <b>a</b>ccrues and <b>r</b>euses <b>k</b>nowledge from <b>heterogeneous</b> expert annotations in various datasets. As a proof of concept, we have trained two Ark models on 335,484 and 704,363 CXRs, respectively, by merging several datasets including ChestX-ray14, CheXpert, MIMIC-II, and VinDr-CXR, evaluated them on a wide range of imaging tasks covering both classification and segmentation via fine-tuning, linear-probing, and gender-bias analysis, and demonstrated our Ark's superior and robust performance over the state-of-the-art (SOTA) fully/self-supervised baselines and Google's proprietary CXR-FM. This enhanced performance is attributed to our simple yet powerful observation that aggregating numerous public datasets diversifies patient populations and accrues knowledge from diverse experts, yielding unprecedented performance yet saving annotation cost. With all codes and pretrained models released at GitHub.com/JLiangLab/Ark, we hope that Ark exerts an important impact on open science, as accruing and reusing knowledge from expert annotations in public datasets can potentially surpass the performance of proprietary models trained on unusually large data, inspiring many more researchers worldwide to share codes and datasets to build open foundation models, accelerate open science, and democratize deep learning for medical imaging.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14220 ","pages":"651-662"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11095392/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140946796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Explainable Geometric-Weighted Graph Attention Network for Identifying Functional Networks Associated with Gait Impairment.
Favour Nerrise, Qingyu Zhao, Kathleen L Poston, Kilian M Pohl, Ehsan Adeli

One of the hallmark symptoms of Parkinson's Disease (PD) is the progressive loss of postural reflexes, which eventually leads to gait difficulties and balance problems. Identifying disruptions in brain function associated with gait impairment could be crucial to better understanding PD motor progression, thus advancing the development of more effective and personalized therapeutics. In this work, we present an explainable, geometric, weighted-graph attention neural network (xGW-GAT) to identify functional networks predictive of the progression of gait difficulties in individuals with PD. xGW-GAT predicts multi-class gait impairment ratings on the MDS-Unified PD Rating Scale (MDS-UPDRS). Our computation- and data-efficient model represents functional connectomes as symmetric positive definite (SPD) matrices on a Riemannian manifold to explicitly encode pairwise interactions of entire connectomes, based on which we learn an attention mask yielding individual- and group-level explainability. Applied to our resting-state functional MRI (rs-fMRI) dataset of individuals with PD, xGW-GAT identifies functional connectivity patterns associated with gait impairment in PD and offers interpretable explanations of functional subnetworks associated with motor impairment. Our model successfully outperforms several existing methods while simultaneously revealing clinically relevant connectivity patterns. The source code is available at https://github.com/favour-nerrise/xGW-GAT.
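
A standard way to work with SPD connectome matrices is to map them into a tangent space via the matrix logarithm (the log-Euclidean approach), after which ordinary models apply. The sketch below shows only this geometric preprocessing step on synthetic data, not the full xGW-GAT pipeline; the ridge term and sizes are illustrative.

```python
import numpy as np

def spd_log(C, eps=1e-6):
    """Matrix logarithm of a symmetric positive (semi-)definite matrix
    via eigendecomposition: V diag(log w) V^T."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(np.maximum(w, eps))) @ V.T

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 30))      # 200 time points, 30 ROIs
C = np.corrcoef(ts, rowvar=False)        # functional connectome (SPD up to noise)
X = spd_log(C + 1e-3 * np.eye(30))       # tangent-space features for learning
print(X.shape)                           # (30, 30)
```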

{"title":"An Explainable Geometric-Weighted Graph Attention Network for Identifying Functional Networks Associated with Gait Impairment.","authors":"Favour Nerrise, Qingyu Zhao, Kathleen L Poston, Kilian M Pohl, Ehsan Adeli","doi":"10.1007/978-3-031-43895-0_68","DOIUrl":"10.1007/978-3-031-43895-0_68","url":null,"abstract":"<p><p>One of the hallmark symptoms of Parkinson's Disease (PD) is the progressive loss of postural reflexes, which eventually leads to gait difficulties and balance problems. Identifying disruptions in brain function associated with gait impairment could be crucial in better understanding PD motor progression, thus advancing the development of more effective and personalized therapeutics. In this work, we present an explainable, geometric, weighted-graph attention neural network (<b>xGW-GAT</b>) to identify functional networks predictive of the progression of gait difficulties in individuals with PD. <b>xGW-GAT</b> predicts the multi-class gait impairment on the MDS-Unified PD Rating Scale (MDS-UPDRS). Our computational- and data-efficient model represents functional connectomes as symmetric positive definite (SPD) matrices on a Riemannian manifold to explicitly encode pairwise interactions of entire connectomes, based on which we learn an attention mask yielding individual- and group-level explainability. Applied to our resting-state functional MRI (rs-fMRI) dataset of individuals with PD, <b>xGW-GAT</b> identifies functional connectivity patterns associated with gait impairment in PD and offers interpretable explanations of functional subnetworks associated with motor impairment. Our model successfully outperforms several existing methods while simultaneously revealing clinically-relevant connectivity patterns. The source code is available at https://github.com/favour-nerrise/xGW-GAT.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14221 ","pages":"723-733"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10657737/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138049118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0