
Medical image analysis — latest publications

Personalized predictions of Glioblastoma infiltration: Mathematical models, Physics-Informed Neural Networks and multimodal scans.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-12 DOI: 10.1016/j.media.2024.103423
Ray Zirui Zhang, Ivan Ezhov, Michal Balcerak, Andy Zhu, Benedikt Wiestler, Bjoern Menze, John S Lowengrub

Predicting the infiltration of Glioblastoma (GBM) from medical MRI scans is crucial for understanding tumor growth dynamics and designing personalized radiotherapy treatment plans. Mathematical models of GBM growth can complement the data in the prediction of spatial distributions of tumor cells. However, this requires estimating patient-specific parameters of the model from clinical data, which is a challenging inverse problem due to limited temporal data and the limited time between imaging and diagnosis. This work proposes a method that uses Physics-Informed Neural Networks (PINNs) to estimate patient-specific parameters of a reaction-diffusion partial differential equation (PDE) model of GBM growth from a single 3D structural MRI snapshot. PINNs embed both the data and the PDE into a loss function, thus integrating theory and data. Key innovations include the identification and estimation of characteristic non-dimensional parameters, a pre-training step that utilizes the non-dimensional parameters, and a fine-tuning step to determine the patient-specific parameters. Additionally, the diffuse-domain method is employed to handle the complex brain geometry within the PINN framework. The method is validated on both synthetic and patient datasets, showing promise for personalized GBM treatment through parametric inference within clinically relevant timeframes.
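For readers unfamiliar with PINNs, the sketch below shows the core idea on a 1D Fisher-KPP reaction-diffusion equation (a common GBM growth model), with the diffusivity and proliferation rate as learnable log-parameters. It is a minimal illustration, not the authors' 3D diffuse-domain implementation; the network size, toy snapshot data, and training schedule are placeholder assumptions.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid())        # normalized cell density in [0, 1]
        # learnable log-parameters of the assumed Fisher-KPP model
        self.log_D = nn.Parameter(torch.tensor(0.0))   # diffusivity
        self.log_rho = nn.Parameter(torch.tensor(0.0)) # proliferation rate

    def forward(self, x, t):
        return self.net(torch.stack([x, t], dim=-1)).squeeze(-1)

def pde_residual(model, x, t):
    # residual of u_t = D u_xx + rho u (1 - u), computed with autograd
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - model.log_D.exp() * u_xx - model.log_rho.exp() * u * (1.0 - u)

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_d, t_d = torch.rand(256), torch.ones(256)            # single "snapshot" at t = 1
u_d = torch.exp(-20 * (x_d - 0.5) ** 2)                # toy tumor-density profile
for step in range(2000):
    opt.zero_grad()
    x_c, t_c = torch.rand(512), torch.rand(512)        # collocation points in space-time
    data_loss = ((model(x_d, t_d) - u_d) ** 2).mean()  # data term of the PINN loss
    pde_loss = (pde_residual(model, x_c, t_c) ** 2).mean()  # physics term
    (data_loss + pde_loss).backward()
    opt.step()
```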

Citations: 0
Contrastive machine learning reveals species-shared and -specific brain functional architecture.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-12 DOI: 10.1016/j.media.2024.103431
Li Yang, Guannan Cao, Songyao Zhang, Weihan Zhang, Yusong Sun, Jingchao Zhou, Tianyang Zhong, Yixuan Yuan, Tao Liu, Tianming Liu, Lei Guo, Yongchun Yu, Xi Jiang, Gang Li, Junwei Han, Tuo Zhang

A deep comparative analysis of the brain functional connectome across primate species has the potential to yield valuable insights for both scientific and clinical applications. However, the interspecies commonalities and differences are inherently entangled with each other and with other irrelevant factors. Here we develop a novel contrastive machine learning method, called shared-unique variation autoencoder (SU-VAE), to allow disentanglement of the species-shared and species-specific functional connectome variation between macaque and human brains on large-scale resting-state fMRI datasets. The method was validated by confirming that human-specific features are differentially related to cognitive scores, while features shared with macaque better capture sensorimotor ones. The projection of disentangled connectomes to the cortex revealed a gradient that reflected species divergence. In contrast to macaque, the introduction of human-specific connectomes to the shared ones enhanced network efficiency. We identified genes enriched for 'axon guidance' that could be related to the human-specific connectomes. The code, containing the model and analysis, can be found at https://github.com/BBBBrain/SU-VAE.
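As a rough illustration of the contrastive-VAE idea behind SU-VAE (the paper's exact architecture and losses differ; the linear encoders/decoder, dimensions, and loss weighting here are placeholder assumptions), one can reserve a "unique" latent for the target species and zero it out for the background species:

```python
import torch
import torch.nn as nn

class SharedUniqueVAE(nn.Module):
    """Contrastive-VAE-style sketch: a 'shared' latent explains both species,
    a 'unique' latent is reserved for the target (human) group."""
    def __init__(self, d_in=190, d_z=16):
        super().__init__()
        self.enc_shared = nn.Linear(d_in, 2 * d_z)   # outputs mu, logvar
        self.enc_unique = nn.Linear(d_in, 2 * d_z)
        self.dec = nn.Linear(2 * d_z, d_in)

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1).mean()
        return z, kl

    def forward(self, x, is_target):
        zs, kl_s = self.sample(self.enc_shared(x))
        zu, kl_u = self.sample(self.enc_unique(x))
        zu = zu * is_target.unsqueeze(-1)            # background: unique latent zeroed
        recon = self.dec(torch.cat([zs, zu], dim=-1))
        return ((recon - x) ** 2).sum(-1).mean() + kl_s + kl_u

# usage sketch: human connectomes as "target", macaque as "background"
model = SharedUniqueVAE()
human, macaque = torch.randn(32, 190), torch.randn(32, 190)
loss = model(human, torch.ones(32)) + model(macaque, torch.zeros(32))
```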

Citations: 0
Improving cross-domain generalizability of medical image segmentation using uncertainty and shape-aware continual test-time domain adaptation.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-10 DOI: 10.1016/j.media.2024.103422
Jiayi Zhu, Bart Bolsterlee, Yang Song, Erik Meijering

Continual test-time adaptation (CTTA) aims to continuously adapt a source-trained model to a target domain with minimal performance loss while assuming no access to the source data. Typically, source models are trained with empirical risk minimization (ERM) and assumed to perform reasonably on the target domain to allow for further adaptation. However, ERM-trained models often fail to perform adequately on a severely drifted target domain, resulting in unsatisfactory adaptation results. To tackle this issue, we propose a generalizable CTTA framework. First, we incorporate domain-invariant shape modeling into the model and train it using domain-generalization (DG) techniques, promoting target-domain adaptability regardless of the severity of the domain shift. Then, an uncertainty and shape-aware mean teacher network performs adaptation with uncertainty-weighted pseudo-labels and shape information. As part of this process, a novel uncertainty-ranked cross-task regularization scheme is proposed to impose consistency between segmentation maps and their corresponding shape representations, both produced by the student model, at the patch and global levels to enhance performance further. Lastly, small portions of the model's weights are stochastically reset to the initial domain-generalized state at each adaptation step, preventing the model from 'diving too deep' into any specific test samples. The proposed method demonstrates strong continual adaptability and outperforms its peers on five cross-domain segmentation tasks, showcasing its effectiveness and generalizability.
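The mean-teacher and uncertainty-weighting ingredients can be sketched as below. This is a generic illustration: the paper's shape-aware terms, domain-generalized pre-training, and stochastic weight resets are not reproduced, and the entropy-based confidence weight is an assumption.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # exponential-moving-average teacher, standard in mean-teacher adaptation
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1 - momentum)

def adaptation_loss(student_logits, teacher_logits):
    # pseudo-labels from the teacher, down-weighted where it is uncertain;
    # logits are (B, C, H, W) segmentation outputs
    probs = teacher_logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # (B, H, W)
    weight = torch.exp(-entropy)                                  # confident -> ~1
    pseudo = probs.argmax(dim=1)                                  # (B, H, W) labels
    loss = F.cross_entropy(student_logits, pseudo, reduction="none")
    return (weight * loss).mean()
```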

Citations: 0
MoMA: Momentum contrastive learning with multi-head attention-based knowledge distillation for histopathology image analysis.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-09 DOI: 10.1016/j.media.2024.103421
Trinh Thi Le Vuong, Jin Tae Kwak

There is no doubt that advanced artificial intelligence models and high-quality data are the keys to success in developing computational pathology tools. Although the overall volume of pathology data keeps increasing, a lack of quality data is a common issue for specific tasks, for several reasons including privacy and ethical issues with patient data. In this work, we propose to exploit knowledge distillation, i.e., to utilize an existing model to learn a new, target model, to overcome such issues in computational pathology. Specifically, we employ a student-teacher framework to learn a target model from a pre-trained teacher model without direct access to source data, and distill relevant knowledge via momentum contrastive learning with a multi-head attention mechanism, which provides consistent and context-aware feature representations. This enables the target model to assimilate informative representations of the teacher model while seamlessly adapting to the unique nuances of the target data. The proposed method is rigorously evaluated across different scenarios where the teacher model was trained on the same, relevant, or irrelevant classification tasks with respect to the target model. Experimental results demonstrate the accuracy and robustness of our approach in transferring knowledge to different domains and tasks, outperforming other related methods. Moreover, the results provide a guideline on the learning strategy for different types of tasks and scenarios in computational pathology.
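A minimal MoCo-style distillation loss conveys the "momentum contrastive" part; the multi-head attention feature alignment in MoMA is omitted, and the tensor shapes and queue handling are assumptions.

```python
import torch
import torch.nn.functional as F

def moco_distill_loss(q, k, queue, tau=0.07):
    """InfoNCE between student query features q (B, d) and momentum-teacher
    key features k (B, d), with a memory queue (K, d) of past keys as
    negatives (MoCo-style)."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)           # (B, 1) positive similarity
    l_neg = q @ queue.T                                # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

# usage sketch; the teacher itself would be updated by an EMA of the student
loss = moco_distill_loss(torch.randn(16, 128), torch.randn(16, 128),
                         torch.randn(4096, 128))
```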

Citations: 0
Dual-modality visual feature flow for medical report generation.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-01 DOI: 10.1016/j.media.2024.103413
Quan Tang, Liming Xu, Yongheng Wang, Bochuan Zheng, Jiancheng Lv, Xianhua Zeng, Weisheng Li

Medical report generation is a cross-modal task of generating medical text information, aiming to provide professional descriptions of medical images in clinical language. Although some methods have made progress, limitations remain, including insufficient focus on lesion areas, omission of internal edge features, and difficulty in aligning cross-modal data. To address these issues, we propose Dual-Modality Visual Feature Flow (DMVF) for medical report generation. Firstly, we introduce region-level features based on grid-level features to enhance the method's ability to identify lesions and key areas. Then, we enhance the two types of feature flows based on their respective attributes to prevent the loss of key information. Finally, we align visual mappings from the different visual features with report textual embeddings through a feature fusion module to perform cross-modal learning. Extensive experiments conducted on four benchmark datasets demonstrate that our approach outperforms state-of-the-art methods in both natural language generation and clinical efficacy metrics.
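One plausible sketch of the fusion step is cross-attention from report tokens onto concatenated grid- and region-level visual features; the module name, dimensions, and attention formulation below are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Sketch: fuse grid- and region-level visual features, then align them
    with report token embeddings via cross-attention (names hypothetical)."""
    def __init__(self, d=256, heads=4):
        super().__init__()
        self.proj_grid = nn.Linear(d, d)
        self.proj_region = nn.Linear(d, d)
        self.xattn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, grid_feats, region_feats, text_emb):
        # grid_feats: (B, Ng, d); region_feats: (B, Nr, d); text_emb: (B, T, d)
        visual = torch.cat([self.proj_grid(grid_feats),
                            self.proj_region(region_feats)], dim=1)
        fused, _ = self.xattn(query=text_emb, key=visual, value=visual)
        return fused  # text tokens enriched with visual context, (B, T, d)

# usage sketch
fusion = FeatureFusion()
out = fusion(torch.randn(2, 49, 256), torch.randn(2, 8, 256), torch.randn(2, 20, 256))
```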

Citations: 0
Comparative benchmarking of failure detection methods in medical image segmentation: Unveiling the role of confidence aggregation.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-30 DOI: 10.1016/j.media.2024.103392
Maximilian Zenk, David Zimmerer, Fabian Isensee, Jeremias Traub, Tobias Norajitra, Paul F Jäger, Klaus Maier-Hein

Semantic segmentation is an essential component of medical image analysis research, with recent deep learning algorithms offering out-of-the-box applicability across diverse datasets. Despite these advancements, segmentation failures remain a significant concern for real-world clinical applications, necessitating reliable detection mechanisms. This paper introduces a comprehensive benchmarking framework aimed at evaluating failure detection methodologies within medical image segmentation. Through our analysis, we identify the strengths and limitations of current failure detection metrics, advocating for risk-coverage analysis as a holistic evaluation approach. Utilizing a collective dataset comprising five public 3D medical image collections, we assess the efficacy of various failure detection strategies under realistic test-time distribution shifts. Our findings highlight the importance of pixel confidence aggregation, and we observe superior performance of the pairwise Dice score (Roy et al., 2019) between ensemble predictions, positioning it as a simple and robust baseline for failure detection in medical image segmentation. To promote ongoing research, we make the benchmarking framework available to the community.
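The pairwise-Dice confidence score highlighted here is simple to compute. The sketch below assumes binary foreground masks from M ensemble members; the benchmark's multi-class handling and aggregation conventions are not reproduced.

```python
import torch

def pairwise_dice_confidence(preds, eps=1e-6):
    """Mean pairwise Dice agreement between binary ensemble predictions.
    preds: (M, H, W) {0, 1} masks from M ensemble members; low agreement
    flags a likely segmentation failure (cf. Roy et al., 2019)."""
    m = preds.shape[0]
    scores = []
    for i in range(m):
        for j in range(i + 1, m):
            inter = (preds[i] * preds[j]).sum()
            denom = preds[i].sum() + preds[j].sum()
            scores.append((2 * inter + eps) / (denom + eps))
    return torch.stack(scores).mean()

# usage sketch: 5 ensemble members, 64x64 binary masks
preds = (torch.rand(5, 64, 64) > 0.5).float()
confidence = pairwise_dice_confidence(preds)
```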

Citations: 0
Toward automated detection of microbleeds with anatomical scale localization using deep learning
IF 10.9 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-30 DOI: 10.1016/j.media.2024.103415
Jun-Ho Kim, Young Noh, Haejoon Lee, Seul Lee, Woo-Ram Kim, Koung Mi Kang, Eung Yeop Kim, Mohammed A. Al-masni, Dong-Hyun Kim
Cerebral Microbleeds (CMBs) are chronic deposits of small blood products in the brain tissues, which, depending on their anatomical location, are explicitly related to various cerebrovascular conditions, including cognitive decline, intracerebral hemorrhage, and cerebral infarction. However, manual detection of CMBs is a time-consuming and error-prone process because of their sparse and tiny structural properties. The detection of CMBs is commonly affected by the presence of many CMB mimics, such as calcifications and pial vessels, which cause a high false-positive rate (FPR). This paper proposes a novel 3D deep learning framework that not only detects CMBs but also identifies their anatomical location in the brain (i.e., lobar, deep, and infratentorial regions). For the CMB detection task, we propose a single end-to-end model by leveraging the 3D U-Net as a backbone with a Region Proposal Network (RPN). To significantly reduce the false positives within the same single model, we develop a new scheme containing a Feature Fusion Module (FFM), which detects small candidates utilizing contextual information, and Hard Sample Prototype Learning (HSPL), which mines CMB mimics and generates an additional loss term called concentration loss using Convolutional Prototype Learning (CPL). For the anatomical localization task, we exploit the 3D U-Net segmentation network to segment anatomical structures of the brain. This task not only identifies to which region the CMBs belong but also eliminates some false positives from the detection task by leveraging anatomical information. We utilize Susceptibility-Weighted Imaging (SWI) and phase images as 3D input to efficiently capture 3D information. The results show that the proposed RPN utilizing the FFM and HSPL outperforms the baseline RPN, achieving a sensitivity of 94.66% vs. 93.33% and an average number of false positives per subject (FPavg) of 0.86 vs. 14.73. Furthermore, the anatomical localization task enhances the detection performance by reducing the FPavg to 0.56 while maintaining a sensitivity of 94.66%.
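As a hedged illustration of the prototype-based ingredient, a CPL-style concentration loss pulls each candidate's embedding toward its class prototype; the RPN, FFM, and hard-sample mining logic are omitted, and the shapes and names here are hypothetical.

```python
import torch

def concentration_loss(features, labels, prototypes):
    """CPL-style concentration loss sketch.
    features: (N, d) candidate embeddings; labels: (N,) in {0..C-1};
    prototypes: (C, d) learnable class centers (e.g., CMB vs. mimic)."""
    return ((features - prototypes[labels]) ** 2).sum(dim=1).mean()

# usage sketch with two classes
feats = torch.randn(8, 64)
labels = torch.randint(0, 2, (8,))
protos = torch.randn(2, 64, requires_grad=True)  # optimized jointly with the network
loss = concentration_loss(feats, labels, protos)
```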
Citations: 0
Outlier detection in cardiac diffusion tensor imaging: Shot rejection or robust fitting?
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-30 DOI: 10.1016/j.media.2024.103386
Sam Coveney, Maryam Afzali, Lars Mueller, Irvin Teh, Arka Das, Erica Dall'Armellina, Filip Szczepankiewicz, Derek K Jones, Jurgen E Schneider

Cardiac diffusion tensor imaging (cDTI) is highly prone to image corruption, yet robust-fitting methods are rarely used. Single voxel outlier detection (SVOD) can overlook corruptions that are visually obvious, perhaps causing reluctance to replace whole-image shot-rejection (SR) despite its own deficiencies. SVOD's deficiencies may be relatively unimportant: corrupted signals that are not statistical outliers may not be detrimental. Multiple voxel outlier detection (MVOD), using a local myocardial neighbourhood, may overcome the shared deficiencies of SR and SVOD for cDTI while keeping the benefits of both. Here, robust fitting methods using M-estimators are derived for both non-linear least squares and weighted least squares fitting, and outlier detection is applied using (i) SVOD; and (ii) SVOD and MVOD. These methods, along with non-robust fitting with/without SR, are applied to cDTI datasets from healthy volunteers and hypertrophic cardiomyopathy patients. Robust fitting methods produce larger group differences with more statistical significance for MD, FA, and E2A, versus non-robust methods, with MVOD giving the largest group differences for MD and FA. Visual analysis demonstrates the superiority of robust-fitting methods over SR, especially when it is difficult to partition the images into good and bad sets. Synthetic experiments confirm that MVOD gives lower root-mean-square-error than SVOD.
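M-estimator fitting is typically implemented as iteratively reweighted least squares (IRLS); the generic Huber-weight sketch below conveys the idea. It is not the paper's cDTI pipeline: the specific estimators, weighting scheme, and diffusion-tensor design matrix are not reproduced, and the constants are standard defaults.

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=20):
    """Robust linear fit y ~ X b via IRLS with a Huber M-estimator."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]           # ordinary LS initialization
    for _ in range(n_iter):
        r = y - X @ b
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / s
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))  # Huber weights down-weight outliers
        sw = np.sqrt(w)
        b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return b

# usage sketch: a corrupted signal barely shifts the robust estimate
X = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
y = X @ np.array([1.0, 2.0]); y[10] += 20.0            # one gross outlier
b_robust = irls_huber(X, y)
```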

Citations: 0
Self-supervised graph contrastive learning with diffusion augmentation for functional MRI analysis and brain disorder detection.
IF 10.7 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-29 DOI: 10.1016/j.media.2024.103403
Xiaochuan Wang, Yuqi Fang, Qianqian Wang, Pew-Thian Yap, Hongtu Zhu, Mingxia Liu

Resting-state functional magnetic resonance imaging (rs-fMRI) provides a non-invasive imaging technique to study patterns of brain activity, and is increasingly used to facilitate automated brain disorder analysis. Existing fMRI-based learning methods often rely on labeled data to construct learning models, while the data annotation process typically requires significant time and resource investment. Graph contrastive learning offers a promising solution to address the small labeled data issue, by augmenting fMRI time series for self-supervised learning. However, data augmentation strategies employed in these approaches may damage the original blood-oxygen-level-dependent (BOLD) signals, thus hindering subsequent fMRI feature extraction. In this paper, we propose a self-supervised graph contrastive learning framework with diffusion augmentation (GCDA) for functional MRI analysis. The GCDA consists of a pretext model and a task-specific model. In the pretext model, we first augment each brain functional connectivity network derived from fMRI through a graph diffusion augmentation (GDA) module, and then use two graph isomorphism networks with shared parameters to extract features in a self-supervised contrastive learning manner. The pretext model can be optimized without the need for labeled training data, while the GDA focuses on perturbing graph edges and nodes, thus preserving the integrity of original BOLD signals. The task-specific model involves fine-tuning the trained pretext model to adapt to downstream tasks. Experimental results on two rs-fMRI cohorts with a total of 1230 subjects demonstrate the effectiveness of our method compared with several state-of-the-arts.
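Graph diffusion augmentations often take the personalized-PageRank closed form shown below, which rewires edge weights while leaving node signals (here, the BOLD time series) untouched; whether GCDA's GDA module uses exactly this form is an assumption.

```python
import numpy as np

def ppr_diffusion(A, alpha=0.15):
    """Personalized-PageRank graph diffusion of an adjacency matrix A:
    S = alpha * (I - (1 - alpha) * D^{-1/2} A D^{-1/2})^{-1}."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A_hat)

# usage sketch on a random symmetric functional-connectivity matrix
A = np.random.rand(10, 10); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
S = ppr_diffusion(A)   # diffused adjacency, typically sparsified before contrastive training
```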

Citations: 0
COLLATOR: Consistent spatial–temporal longitudinal atlas construction via implicit neural representation
IF 10.9 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-28 DOI: 10.1016/j.media.2024.103396
Lixuan Chen, Xuanyu Tian, Jiangjie Wu, Guoyan Lao, Yuyao Zhang, Hongjiang Wei
Longitudinal brain atlases, which present brain development trends over time, are essential tools for brain development studies. However, conventional methods construct these atlases by independently averaging brain images from different individuals at discrete time points. This approach can introduce temporal inconsistencies due to variations in ontogenetic trends among samples, potentially affecting the accuracy of brain developmental characteristic analysis. In this paper, we propose an implicit neural representation (INR)-based framework to improve the temporal consistency of longitudinal atlases. We treat temporal inconsistency as a 4-dimensional (4D) image denoising task, where the data consist of 3D spatial information and 1D temporal progression. We formulate the longitudinal atlas as an implicit function of the spatial-temporal coordinates, allowing structural inconsistency over time to be treated as 3D image noise along age. Inspired by recent self-supervised denoising methods (e.g., Noise2Noise), our approach learns a noise-free and temporally continuous implicit function from inconsistent longitudinal atlas data. Finally, the time-consistent longitudinal brain atlas can be reconstructed by evaluating the denoised 4D INR function at critical brain development time points. We evaluate our approach on three longitudinal brain atlases of different MRI modalities, demonstrating that our method significantly improves temporal consistency while accurately preserving brain structures. Additionally, the continuous functions generated by our method enable the creation of 4D atlases with higher spatial and temporal resolution. Code: https://github.com/maopaom/COLLATOR.
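The coordinate-network backbone of such an INR can be sketched as follows; the Fourier frequencies and widths are assumptions, and the Noise2Noise-style pairing of inconsistent atlas samples in the training loss is omitted.

```python
import torch
import torch.nn as nn

class AtlasINR(nn.Module):
    """Minimal INR sketch: image intensity as an implicit function of the
    4D coordinate (x, y, z, t), with a Fourier positional encoding."""
    def __init__(self, hidden=128, n_freq=8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freq))
        self.net = nn.Sequential(
            nn.Linear(4 * 2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, coords):                          # coords: (N, 4)
        x = coords.unsqueeze(-1) * self.freqs           # (N, 4, F)
        enc = torch.cat([x.sin(), x.cos()], dim=-1).flatten(1)
        return self.net(enc).squeeze(-1)                # intensity per coordinate

# usage sketch: query intensities at normalized (x, y, z, age) coordinates
model = AtlasINR()
intensity = model(torch.rand(1024, 4))                  # (1024,)
```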
Citations: 0