Latest Publications in Medical Image Analysis

Probabilistic learning of the Purkinje network from the electrocardiogram
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-21 | DOI: 10.1016/j.media.2025.103460
Felipe Álvarez-Barrientos , Mariana Salinas-Camus , Simone Pezzuto , Francisco Sahli Costabal
The identification of the Purkinje conduction system in the heart is a challenging task, yet essential for a correct definition of cardiac digital twins for precision cardiology. Here, we propose a probabilistic approach for identifying the Purkinje network from non-invasive clinical data such as the standard electrocardiogram (ECG). We use cardiac imaging to build an anatomically accurate model of the ventricles; we algorithmically generate a rule-based Purkinje network tailored to the anatomy; we simulate physiological electrocardiograms with a fast model; we identify the geometrical and electrical parameters of the Purkinje-ECG model with Bayesian optimization and approximate Bayesian computation. The proposed approach is inherently probabilistic and generates a population of plausible Purkinje networks, all fitting the ECG within a given tolerance. In this way, we can estimate the uncertainty of the parameters, thus providing reliable predictions. We test our methodology in physiological and pathological scenarios, showing that we are able to accurately recover the ECG with our model. We propagate the uncertainty in the Purkinje network parameters in a simulation of conduction system pacing therapy. Our methodology is a step forward in the creation of digital twins from non-invasive data in precision medicine. An open-source implementation can be found at http://github.com/fsahli/purkinje-learning.
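The approximate Bayesian computation step can be pictured with a generic rejection sampler: draw parameters from the prior, simulate an ECG, and keep only draws whose simulation falls within a tolerance of the measurement. The sketch below is a minimal, self-contained illustration of that loop; `simulate_ecg` and the prior bounds are hypothetical stand-ins, not the paper's anatomy-based forward model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ecg(params):
    # Hypothetical stand-in forward model: maps two parameters (amplitude,
    # frequency) to an ECG-like trace. The paper uses an anatomy-based model.
    t = np.linspace(0, 1, 200)
    return params[0] * np.sin(2 * np.pi * params[1] * t)

def rejection_abc(observed, prior_sampler, n_draws=10_000, tol=2.0):
    """Keep parameter draws whose simulated ECG lies within `tol` of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if np.linalg.norm(simulate_ecg(theta) - observed) < tol:
            accepted.append(theta)
    return np.array(accepted)  # a population of plausible parameter sets

observed = simulate_ecg(np.array([1.0, 3.0])) + 0.01 * rng.standard_normal(200)
prior = lambda: rng.uniform([0.5, 1.0], [1.5, 5.0])  # assumed prior bounds
samples = rejection_abc(observed, prior)
print(len(samples), samples.mean(axis=0))  # accepted population and its mean
```

The accepted set, rather than a single best fit, is what carries the parameter uncertainty forward into downstream simulations.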
Personalized topology-informed localization of standard 12-lead ECG electrode placement from incomplete cardiac MRIs for efficient cardiac digital twins
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-21 | DOI: 10.1016/j.media.2025.103472
Lei Li , Hannah Smith , Yilin Lyu , Julia Camps , Shuang Qian , Blanca Rodriguez , Abhirup Banerjee , Vicente Grau
Cardiac digital twins (CDTs) offer personalized in-silico cardiac representations for the inference of multi-scale properties tied to cardiac mechanisms. The creation of CDTs requires precise information about the electrode positions on the torso, especially for personalized electrocardiogram (ECG) calibration. However, current studies commonly rely on additional acquisition of torso imaging and manual/semi-automatic methods for ECG electrode localization. In this study, we propose a novel and efficient topology-informed model to fully automatically extract personalized standard ECG electrode locations from 2D clinically standard cardiac MRIs. Specifically, we obtain sparse torso contours from the cardiac MRIs and then localize the standard electrodes of the 12-lead ECG from these contours. Cardiac MRIs are acquired to image the heart rather than the torso, so the torso geometry captured within the imaging is incomplete. To tackle the missing topology, we incorporate the electrodes as a subset of the keypoints, which can be explicitly aligned with the 3D torso topology. The experimental results demonstrate that the proposed model outperforms the time-consuming conventional model-projection-based method in terms of accuracy (Euclidean distance: 1.24±0.293 cm vs. 1.48±0.362 cm) and efficiency (2 s vs. 30-35 min). We further demonstrate the effectiveness of using the detected electrodes for in-silico ECG simulation, highlighting their potential for creating accurate and efficient CDT models. The code is available at https://github.com/lileitech/12lead_ECG_electrode_localizer.
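The accuracy metric quoted above is the mean Euclidean distance between predicted and reference 3D electrode positions. A minimal sketch of that computation, on synthetic coordinates standing in for real localizer output:

```python
import numpy as np

def mean_electrode_error(pred, ref):
    """pred, ref: (n_electrodes, 3) arrays of 3D positions in cm."""
    return np.linalg.norm(pred - ref, axis=1).mean()

rng = np.random.default_rng(1)
ref = rng.uniform(0, 40, size=(10, 3))           # 10 standard electrodes
pred = ref + rng.normal(0, 0.7, size=(10, 3))    # simulated localizer output
print(f"mean error: {mean_electrode_error(pred, ref):.2f} cm")
```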
TractGraphFormer: Anatomically informed hybrid graph CNN-transformer network for interpretable sex and age prediction from diffusion MRI tractography
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-20 | DOI: 10.1016/j.media.2025.103476
Yuqian Chen , Fan Zhang , Meng Wang , Leo R. Zekelman , Suheyla Cetin-Karayumak , Tengfei Xue , Chaoyi Zhang , Yang Song , Jarrett Rushmore , Nikos Makris , Yogesh Rathi , Weidong Cai , Lauren J. O'Donnell
The relationship between brain connections and non-imaging phenotypes is increasingly studied using deep neural networks. However, the local and global properties of the brain's white matter (WM) networks are often overlooked in convolutional network design. We introduce TractGraphFormer, a hybrid Graph CNN-Transformer deep learning framework tailored for diffusion MRI tractography. This model leverages local anatomical characteristics and global feature dependencies of white matter structures. The Graph CNN module captures white matter geometry and grey matter connectivity to aggregate local features from anatomically similar white matter connections, while the Transformer module uses self-attention to enhance global information learning. Additionally, TractGraphFormer includes an attention module for interpreting predictive white matter connections. We apply TractGraphFormer to tasks of sex and age prediction. TractGraphFormer shows strong performance in large datasets of children (n = 9345) and young adults (n = 1065). Overall, our approach suggests that widespread connections in the WM are predictive of the sex and age of an individual. For each prediction task, consistent predictive anatomical tracts are identified across the two datasets. The proposed approach highlights the potential of integrating local anatomical information and global feature dependencies to improve prediction performance in machine learning with diffusion MRI tractography.
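The local/global split described above can be pictured with a toy hybrid layer: a graph convolution aggregates features over an adjacency matrix encoding anatomical similarity, and multi-head self-attention mixes information globally. The module below is a generic sketch of that pattern, not the TractGraphFormer architecture; the shapes, fusion rule, and adjacency construction are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphAttentionHybrid(nn.Module):
    """Toy local/global hybrid: graph convolution plus self-attention."""
    def __init__(self, in_dim, hid_dim, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, n_heads, batch_first=True)

    def forward(self, x, adj):
        # x: (batch, n_nodes, in_dim); adj: (n_nodes, n_nodes), row-normalized
        local = torch.relu(adj @ self.proj(x))       # aggregate similar tracts
        global_, _ = self.attn(local, local, local)  # global dependencies
        return local + global_                       # fuse the two views

x = torch.randn(2, 50, 16)                 # 2 subjects, 50 tract nodes
adj = torch.eye(50) + 0.1 * torch.rand(50, 50)
adj = adj / adj.sum(dim=1, keepdim=True)   # row-normalize the adjacency
print(GraphAttentionHybrid(16, 32)(x, adj).shape)  # torch.Size([2, 50, 32])
```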
Enhancing lesion detection in automated breast ultrasound using unsupervised multi-view contrastive learning with 3D DETR
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-18 | DOI: 10.1016/j.media.2025.103466
Xing Tao , Yan Cao , Yanhui Jiang , Xiaoxi Wu , Dan Yan , Wen Xue , Shulian Zhuang , Xin Yang , Ruobing Huang , Jianxing Zhang , Dong Ni
The inherent variability of lesions poses challenges in leveraging AI in 3D automated breast ultrasound (ABUS) for lesion detection. Traditional methods based on single scans have fallen short compared to comprehensive evaluations by experienced sonologists using multiple scans. To address this, our study introduces an innovative approach combining the multi-view co-attention mechanism (MCAM) with unsupervised contrastive learning. Rooted in the detection transformer (DETR) architecture, our model employs a one-to-many matching strategy, significantly boosting training efficiency and lesion recall metrics. The model integrates MCAM within the decoder, facilitating the interpretation of lesion data across diverse views. Simultaneously, unsupervised multi-view contrastive learning (UMCL) aligns features consistently across scans, improving detection performance. When tested on two multi-center datasets comprising 1509 patients, our approach outperforms existing state-of-the-art 3D detection models. Notably, our model achieves a 90.3% cancer detection rate with a false positive per image (FPPI) rate of 0.5 on the external validation dataset. This surpasses junior sonologists and matches the performance of seasoned experts.
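The one-to-many matching strategy can be sketched generically: duplicate each ground-truth lesion k times before Hungarian matching, so several queries receive supervision per lesion instead of DETR's usual one. The snippet below illustrates the idea on a random cost matrix; the cost terms and k are placeholders, not the paper's settings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_many_match(cost, k=3):
    """cost: (n_queries, n_gt) matching cost; returns (query_idx, gt_idx)."""
    tiled = np.tile(cost, (1, k))            # duplicate each GT column k times
    q_idx, t_idx = linear_sum_assignment(tiled)
    return q_idx, t_idx % cost.shape[1]      # map duplicates back to their GT

rng = np.random.default_rng(2)
cost = rng.random((100, 4))                  # 100 queries, 4 lesions
q_idx, gt_idx = one_to_many_match(cost, k=3)
print(len(q_idx), np.bincount(gt_idx))       # 12 matches, 3 per lesion
```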
Benefit from public unlabeled data: A Frangi filter-based pretraining network for 3D cerebrovascular segmentation
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-17 | DOI: 10.1016/j.media.2024.103442
Gen Shi , Hao Lu , Hui Hui , Jie Tian
Precise cerebrovascular segmentation in Time-of-Flight Magnetic Resonance Angiography (TOF-MRA) data is crucial for computer-aided clinical diagnosis. The sparse distribution of cerebrovascular structures within TOF-MRA images often results in high costs for manual data labeling. Leveraging unlabeled TOF-MRA data can significantly enhance model performance. In this study, we have constructed the largest preprocessed unlabeled TOF-MRA dataset to date, comprising 1510 subjects. Additionally, we provide manually annotated segmentation masks for 113 subjects based on existing external image datasets to facilitate evaluation. We propose a simple yet effective pretraining strategy utilizing the Frangi filter, known for its capability to enhance vessel-like structures, to optimize the use of the unlabeled data for 3D cerebrovascular segmentation. This involves a Frangi filter-based preprocessing workflow tailored for large-scale unlabeled datasets and a multi-task pretraining strategy to efficiently utilize the preprocessed data. This approach ensures maximal extraction of useful knowledge from the unlabeled data. The efficacy of the pretrained model is assessed across four cerebrovascular segmentation datasets, where it demonstrates superior performance, improving the clDice metric by approximately 2%–3% compared to the latest semi- and self-supervised methods. Additionally, ablation studies validate the generalizability and effectiveness of our pretraining method across various backbone structures. The code and data are open-source at: https://github.com/shigen-StoneRoot/FFPN.
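The Frangi filter itself is available off the shelf; a minimal sketch of applying it to a 3D volume with scikit-image follows. The sigmas, threshold, and pseudo-label step are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(3)
volume = rng.random((32, 64, 64)).astype(np.float32)  # stand-in TOF-MRA volume

# black_ridges=False enhances bright tubular structures, matching the
# appearance of vessels in TOF-MRA.
vesselness = frangi(volume, sigmas=(1, 2, 3), black_ridges=False)
target = vesselness > np.percentile(vesselness, 99)   # crude pseudo-label sketch
print(vesselness.shape, target.mean())
```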
Identifying multilayer network hub by graph representation learning
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-16 | DOI: 10.1016/j.media.2025.103463
Defu Yang , Minjeong Kim , Yu Zhang , Guorong Wu
The recent advances in neuroimaging technology allow us to understand how the human brain is wired in vivo and how functional activity is synchronized across multiple regions. Growing evidence shows that the complexity of functional connectivity is far beyond the widely used mono-layer network. Indeed, the hierarchical processing of information among distinct brain regions and across multiple channels requires a more advanced multilayer model to understand the synchronization across the brain that underlies functional brain networks. However, a principled approach for characterizing network organization in the context of multilayer topologies is largely unexplored. In this work, we present a novel multi-variate hub identification method that takes both the intra- and inter-layer network topologies into account. Specifically, we put the spotlight on multilayer graph embeddings that allow us to separate connector hubs (connecting across network modules) from their peripheral nodes. The removal of these hub nodes breaks down the entire multilayer brain network into a set of disconnected communities. We have evaluated our novel multilayer hub identification method on task-based and resting-state functional images. Complementing ongoing findings using mono-layer brain networks, our multilayer network analysis provides a new understanding of brain network topology that links functional connectivities with brain states and disease progression.
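The "remove the connector hubs and the network fragments" behavior can be illustrated on a toy two-layer supra-graph, as sketched below. Node degree stands in for the learned embedding-based hub score, and the topology is synthetic; this is only the structural intuition, not the paper's method.

```python
import networkx as nx

layer = nx.barbell_graph(5, 0)            # two cliques joined by a bridge edge
G = nx.Graph()
for l in (0, 1):                          # replicate the topology on two layers
    G.add_edges_from(((l, u), (l, v)) for u, v in layer.edges)
for n in layer.nodes:                     # inter-layer coupling edges
    G.add_edge((0, n), (1, n))

# Degree is a stand-in hub score: the four highest-degree nodes are the
# bridge (connector) nodes on both layers.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:4]
G.remove_nodes_from(n for n, _ in hubs)
print(nx.number_connected_components(G))  # 2: the supra-graph falls apart
```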
Illuminating the unseen: Advancing MRI domain generalization through causality
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-16 | DOI: 10.1016/j.media.2025.103459
Yunqi Wang , Tianjiao Zeng , Furui Liu , Qi Dou , Peng Cao , Hing-Chiu Chang , Qiao Deng , Edward S. Hui
Deep learning methods have shown promise in accelerated MRI reconstruction but face significant challenges under domain shifts between training and testing datasets, such as changes in image contrasts, anatomical regions, and acquisition strategies. To address these challenges, we present the first domain generalization framework specifically designed for accelerated MRI reconstruction, targeting robustness across unseen domains. The framework employs progressive strategies to enforce domain invariance, starting with image-level fidelity consistency to ensure robust reconstruction quality across domains, and feature alignment to capture domain-invariant representations. Advancing beyond these foundations, we propose a novel approach enforcing mechanism-level invariance, termed GenCA-MRI, which aligns intrinsic causal relationships within MRI data. We further develop a computational strategy that significantly reduces the complexity of causal alignment, ensuring its feasibility for real-world applications. Extensive experiments validate the framework's effectiveness, demonstrating both numerical and visual improvements over the baseline algorithm. GenCA-MRI presents the overall best performance, achieving a PSNR improvement of up to 2.15 dB on fastMRI and 1.24 dB on the IXI dataset at 8× acceleration, with superior performance in preserving anatomical details and mitigating the domain-shift problem.
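For reference, the PSNR figures quoted above follow the standard definition PSNR = 10 log10(peak² / MSE); a minimal sketch on synthetic magnitude images:

```python
import numpy as np

def psnr(recon, ref, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the reference max."""
    peak = ref.max() if peak is None else peak
    mse = np.mean((recon - ref) ** 2)
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(4)
ref = rng.random((256, 256))                       # stand-in ground truth
recon = ref + 0.01 * rng.standard_normal(ref.shape)  # stand-in reconstruction
print(f"{psnr(recon, ref):.2f} dB")
```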
VSNet: Vessel Structure-aware Network for hepatic and portal vein segmentation
IF 10.7 1区 医学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-16 DOI: 10.1016/j.media.2025.103458
Jichen Xu , Anqi Dong , Yang Yang , Shuo Jin , Jianping Zeng , Zhengqing Xu , Wei Jiang , Liang Zhang , Jiahong Dong , Bo Wang
Identifying and segmenting the hepatic and portal veins, the two predominant vascular systems in the liver, from CT scans plays a crucial role for clinicians in preoperative planning of treatment strategies. However, existing segmentation models often struggle to capture fine details of minor veins. In this article, we introduce the Vessel Structure-aware Network (VSNet), a multi-task learning model with a vessel-growing decoder, to address this challenge. VSNet excels at accurate segmentation by capturing the topological features of minor veins while preserving correct connectivity from minor vessels to trunks. We also build and publish the largest dataset (303 cases) for hepatic and portal vessel segmentation. Through comprehensive experiments, we demonstrate that VSNet achieves the best Dice scores of 0.824 for the hepatic vein and 0.807 for the portal vein on our proposed dataset, significantly outperforming other popular segmentation models. The source code and dataset are publicly available at https://github.com/XXYZB/VSNet.
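The Dice scores quoted above follow the standard overlap definition, Dice = 2|P∩G| / (|P|+|G|); a minimal sketch on synthetic binary vessel masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

rng = np.random.default_rng(5)
gt = rng.random((64, 64, 64)) > 0.95          # sparse vessel-like mask
pred = gt ^ (rng.random(gt.shape) > 0.999)    # slightly perturbed prediction
print(f"Dice: {dice(pred, gt):.3f}")
```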
UnICLAM: Contrastive representation learning with adversarial masking for unified and interpretable Medical Vision Question Answering
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-15 | DOI: 10.1016/j.media.2025.103464
Chenlu Zhan , Peng Peng , Hongwei Wang , Gaoang Wang , Yu Lin , Tao Chen , Hongsen Wang
Medical Visual Question Answering aims to assist doctors in decision-making when answering clinical questions regarding radiology images. Nevertheless, current models learn cross-modal representations with vision and text encoders residing in two separate spaces, which inevitably leads to indirect semantic alignment. In this paper, we propose UnICLAM, a Unified and Interpretable Medical-VQA model based on Contrastive Representation Learning with Adversarial Masking. To learn an aligned image–text representation, we first establish a unified dual-stream pre-training structure with a gradually soft parameter-sharing strategy for alignment. Specifically, the proposed strategy learns a constraint for the vision and text encoders to be close in the same space, which is gradually loosened as the number of layers increases, so as to narrow the distance between the two modalities. To grasp the unified semantic cross-modal representation, we extend adversarial masking data augmentation to the contrastive representation learning of vision and text in a unified manner. While the encoder training minimizes the distance between the original and masked samples, the adversarial masking module conversely learns to maximize it. We also further explore the unified adversarial masking augmentation method, which improves potential ante-hoc interpretability with remarkable performance and efficiency. Experimental results on the VQA-RAD and SLAKE benchmarks demonstrate that UnICLAM outperforms 11 existing state-of-the-art Medical-VQA methods. More importantly, we additionally examine the performance of UnICLAM in diagnosing heart failure, verifying that it exhibits superior few-shot adaptation performance in practical disease diagnosis. The codes and models will be released upon acceptance of the paper.
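The image-text alignment objective can be pictured with a generic symmetric InfoNCE loss over a batch of paired embeddings, as sketched below. This is the standard CLIP-style formulation, not UnICLAM's full objective, which additionally involves the adversarial masking branch; the temperature and dimensions are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def symmetric_infonce(img_emb, txt_emb, temperature=0.07):
    """Pull matched image-text pairs together, push mismatched pairs apart."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature     # (batch, batch) similarities
    labels = torch.arange(len(img))          # matched pairs on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

img_emb = torch.randn(8, 128)                # stand-in image embeddings
txt_emb = torch.randn(8, 128)                # stand-in text embeddings
print(symmetric_infonce(img_emb, txt_emb).item())
```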
SIRE: Scale-invariant, rotation-equivariant estimation of artery orientations using graph neural networks
IF 10.7 | CAS Tier 1 (Medicine) | Q1 Computer Science, Artificial Intelligence | Pub Date: 2025-01-15 | DOI: 10.1016/j.media.2025.103467
Dieuwertje Alblas , Julian Suk , Christoph Brune , Kak Khee Yeung , Jelmer M. Wolterink
The orientation of a blood vessel as visualized in 3D medical images is an important descriptor of its geometry that can be used for centerline extraction and subsequent segmentation, labeling, and visualization. Blood vessels appear at multiple scales and levels of tortuosity, and determining the exact orientation of a vessel is a challenging problem. Recent works have used 3D convolutional neural networks (CNNs) for this purpose, but CNNs are sensitive to variations in vessel size and orientation. We present SIRE: a scale-invariant, rotation-equivariant estimator for local vessel orientation. SIRE is modular and has strong generalization properties due to its symmetry preservation.
SIRE consists of a gauge equivariant mesh CNN (GEM-CNN) that operates in parallel on multiple nested spherical meshes with different sizes. The features on each mesh are a projection of image intensities within the corresponding sphere. These features are intrinsic to the sphere and, in combination with the gauge equivariant properties of GEM-CNN, lead to SO(3) rotation equivariance. Approximate scale invariance is achieved by weight sharing and use of a symmetric maximum aggregation function to combine predictions at multiple scales. Hence, SIRE can be trained with arbitrarily oriented vessels with varying radii to generalize to vessels with a wide range of calibres and tortuosity.
We demonstrate the efficacy of SIRE using three datasets containing vessels of varying scales: the vascular model repository (VMR), the ASOCA coronary artery set, and an in-house set of abdominal aortic aneurysms (AAAs). We embed SIRE in a centerline tracker which accurately tracks large-calibre AAAs, regardless of the data SIRE is trained with. Moreover, a tracker can use SIRE to track small-calibre tortuous coronary arteries, even when trained only with large-calibre, non-tortuous AAAs. Additional experiments are performed to verify the rotation-equivariant and scale-invariant properties of SIRE.
In conclusion, by incorporating SO(3) and scale symmetries, SIRE can be used to determine orientations of vessels outside of the training domain, offering a robust and data-efficient solution to geometric analysis of blood vessels in 3D medical images.
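The weight sharing plus symmetric maximum described above can be pictured with a toy module: one shared head scores features extracted at each nested-sphere scale, and the per-scale responses are combined with an elementwise maximum, which is invariant to permuting the scale axis. This is a generic sketch of that aggregation pattern under assumed shapes, not the GEM-CNN itself.

```python
import torch
import torch.nn as nn

class MaxOverScales(nn.Module):
    """Shared head applied per scale, fused by a symmetric max."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.head = nn.Linear(in_dim, out_dim)   # one head shared across scales

    def forward(self, feats_per_scale):
        # feats_per_scale: list of (batch, in_dim) features, one per radius
        preds = torch.stack([self.head(f) for f in feats_per_scale])
        return preds.max(dim=0).values           # symmetric max over scales

feats = [torch.randn(4, 16) for _ in range(3)]  # 3 nested spherical scales
print(MaxOverScales(16, 3)(feats).shape)        # torch.Size([4, 3])
```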