
Medical Image Analysis: Latest Publications

Cross-center Model Adaptive Tooth segmentation.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-27 | DOI: 10.1016/j.media.2024.103443
Ruizhe Chen, Jianfei Yang, Huimin Xiong, Ruiling Xu, Yang Feng, Jian Wu, Zuozhu Liu

Automatic 3-dimensional tooth segmentation on intraoral scans (IOS) plays a pivotal role in computer-aided orthodontic treatments. In practice, deploying existing well-trained models to different medical centers suffers from two main problems: (1) the data distribution shifts between existing and new centers, which causes significant performance degradation. (2) The data in the existing center(s) is usually not permitted to be shared, and annotating additional data in the new center(s) is time-consuming and expensive, thus making re-training or fine-tuning infeasible. In this paper, we propose a framework for Cross-center Model Adaptive Tooth segmentation (CMAT) to alleviate these issues. CMAT takes the trained model(s) from the source center(s) as input and adapts them to different target centers, without data transmission or additional annotations. CMAT is applicable to three cross-center scenarios: source-data-free, multi-source-data-free, and test-time. The model adaptation in CMAT is realized by a tooth-level prototype alignment module, a progressive pseudo-labeling transfer module, and a tooth-prior regularized information maximization module. Experiments under three cross-center scenarios on two datasets show that CMAT can consistently surpass existing baselines. The effectiveness is further verified with extensive ablation studies and statistical analysis, demonstrating its applicability for privacy-preserving model adaptive tooth segmentation in real-world digital dentistry.
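
To make the adaptation objective concrete: the information-maximization idea the abstract names is typically implemented as an entropy term (each prediction should be confident) plus a diversity term (the marginal label distribution should stay broad). The sketch below is a generic version under those assumptions; the function name and the omission of CMAT's tooth-prior regularization are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Generic IM loss for source-free adaptation (sketch, not CMAT's exact term)."""
    p = F.softmax(logits, dim=1)                       # (N, C) per-point tooth probabilities
    # Entropy term: push each point's prediction to be confident.
    ent = -(p * torch.log(p + eps)).sum(dim=1).mean()
    # Diversity term: negative entropy of the marginal, so minimizing it
    # keeps the batch from collapsing onto a few tooth labels.
    p_bar = p.mean(dim=0)
    div = (p_bar * torch.log(p_bar + eps)).sum()
    return ent + div
```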

Citations: 0
A lightweight generative model for interpretable subject-level prediction.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-27 | DOI: 10.1016/j.media.2024.103436
Chiara Mauri, Stefano Cerri, Oula Puonti, Mark Mühlau, Koen Van Leemput

Recent years have seen a growing interest in methods for predicting an unknown variable of interest, such as a subject's diagnosis, from medical images depicting its anatomical-functional effects. Methods based on discriminative modeling excel at making accurate predictions, but are challenged in their ability to explain their decisions in anatomically meaningful terms. In this paper, we propose a simple technique for single-subject prediction that is inherently interpretable. It augments the generative models used in classical human brain mapping techniques, in which the underlying cause-effect relations can be encoded, with a multivariate noise model that captures dominant spatial correlations. Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions, while at the same time offering intuitive visual explanations of its inner workings. The method is easy to use: training is fast for typical training set sizes, and only a single hyperparameter needs to be set by the user. Our code is available at https://github.com/chiara-mauri/Interpretable-subject-level-prediction.
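
"Efficiently inverted" here refers to turning a generative model p(image | label) into a predictor p(label | image) via Bayes' rule. As a toy illustration only, the sketch below assumes Gaussian class-conditionals; the helper names and the Gaussian choice are our assumptions, not the paper's multivariate noise model.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_generative(X, y):
    """Fit class priors and Gaussian class-conditionals (toy stand-in)."""
    classes = np.unique(y)
    priors = {c: float(np.mean(y == c)) for c in classes}
    params = {c: (X[y == c].mean(axis=0),
                  np.cov(X[y == c].T) + 1e-6 * np.eye(X.shape[1]))
              for c in classes}
    return classes, priors, params

def predict_proba(x, classes, priors, params):
    """Invert the model with Bayes' rule: p(y|x) is proportional to p(x|y) p(y)."""
    logp = np.array([np.log(priors[c]) + multivariate_normal.logpdf(x, *params[c])
                     for c in classes])
    logp -= logp.max()                     # numerical stability before exponentiating
    p = np.exp(logp)
    return p / p.sum()
```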

Citations: 0
Learnable color space conversion and fusion for stain normalization in pathology images.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-24 | DOI: 10.1016/j.media.2024.103424
Jing Ke, Yijin Zhou, Yiqing Shen, Yi Guo, Ning Liu, Xiaodan Han, Dinggang Shen

Variations in hue and contrast are common in H&E-stained pathology images due to differences in slide preparation across various institutions. Such stain variations, while not affecting pathologists much in diagnosing the biopsy, pose significant challenges for computer-assisted diagnostic systems, leading to potential underdiagnosis or misdiagnosis, especially when stain differentiation introduces substantial heterogeneity across datasets from different sources. Traditional stain normalization methods, aimed at mitigating these issues, often require labor-intensive selection of appropriate templates, limiting their practicality and automation. Innovatively, we propose a Learnable Stain Normalization layer, i.e. LStainNorm, designed as an easily integrable component for pathology image analysis. It minimizes the need for manual template selection by autonomously learning the optimal stain characteristics. Moreover, the learned optimal stain template provides the interpretability to enhance the understanding of the normalization process. Additionally, we demonstrate that fusing pathology images normalized in multiple color spaces can improve performance. Therefore, we extend LStainNorm with a novel self-attention mechanism to facilitate the fusion of features across different attributes and color spaces. Experimentally, LStainNorm outperforms the state-of-the-art methods including conventional ones and GANs on two classification datasets and three nuclei segmentation datasets by an average increase of 4.78% in accuracy, 3.53% in Dice coefficient, and 6.59% in IoU. Additionally, by enabling an end-to-end training and inference process, LStainNorm eliminates the need for intermediate steps between normalization and analysis, resulting in more efficient use of hardware resources and significantly faster inference time, i.e., up to hundreds of times quicker than traditional methods. The code is publicly available at https://github.com/yjzscode/Optimal-Normalisation-in-Color-Spaces.
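
The core of a learnable stain-normalization layer can be pictured as a Reinhard-style statistics transfer whose target template is a trainable parameter rather than a hand-picked reference slide. A minimal PyTorch sketch under that assumption (LStainNorm's color-space conversions and attention fusion are omitted):

```python
import torch
import torch.nn as nn

class LearnableStainNorm(nn.Module):
    """Whiten each image's channel statistics, then map them onto a
    learned stain template (simplified sketch, not the paper's layer)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.t_mean = nn.Parameter(torch.zeros(channels))  # learned template mean
        self.t_std = nn.Parameter(torch.ones(channels))    # learned template std

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        mu = x.mean(dim=(2, 3), keepdim=True)
        sd = x.std(dim=(2, 3), keepdim=True) + 1e-6
        x = (x - mu) / sd                                  # per-image whitening
        return x * self.t_std.view(1, -1, 1, 1) + self.t_mean.view(1, -1, 1, 1)
```

Because the template parameters are trained end-to-end with the downstream classifier or segmenter, no manual template selection step is needed, which matches the motivation in the abstract.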

Citations: 0
SurgiTrack: Fine-grained multi-class multi-tool tracking in surgical videos.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-17 | DOI: 10.1016/j.media.2024.103438
Chinedu Innocent Nwoye, Nicolas Padoy

Accurate tool tracking is essential for the success of computer-assisted intervention. Previous efforts often modeled tool trajectories rigidly, overlooking the dynamic nature of surgical procedures, especially tracking scenarios like out-of-body and out-of-camera views. Addressing this limitation, the new CholecTrack20 dataset provides detailed labels that account for multiple tool trajectories in three perspectives: (1) intraoperative, (2) intracorporeal, and (3) visibility, representing the different types of temporal duration of tool tracks. These fine-grained labels enhance tracking flexibility but also increase the task complexity. Re-identifying tools after occlusion or re-insertion into the body remains challenging due to high visual similarity, especially among tools of the same category. This work recognizes the critical role of the tool operators in distinguishing tool track instances, especially those belonging to the same tool category. The operators' information is, however, not explicitly captured in surgical videos. We therefore propose SurgiTrack, a novel deep learning method that leverages YOLOv7 for precise tool detection and employs an attention mechanism to model the originating direction of the tools, as a proxy to their operators, for tool re-identification. To handle diverse tool trajectory perspectives, SurgiTrack employs a harmonizing bipartite matching graph, minimizing conflicts and ensuring accurate tool identity association. Experimental results on CholecTrack20 demonstrate SurgiTrack's effectiveness, outperforming baselines and state-of-the-art methods with real-time inference capability. This work sets a new standard in surgical tool tracking, providing dynamic trajectories for more adaptable and precise assistance in minimally invasive surgeries.
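
The association step behind any bipartite-matching tracker reduces to a Hungarian assignment over a track-by-detection cost matrix. A generic sketch follows; the 0.8 gating threshold and the idea of blending box overlap with direction similarity in the cost are illustrative stand-ins, not SurgiTrack's published values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost: np.ndarray, max_cost: float = 0.8):
    """cost[i, j]: dissimilarity between track i and detection j
    (e.g. 1 - IoU blended with a direction-similarity term)."""
    rows, cols = linear_sum_assignment(cost)        # optimal bipartite matching
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    unmatched_tracks = sorted(set(range(cost.shape[0])) - {r for r, _ in matches})
    unmatched_dets = sorted(set(range(cost.shape[1])) - {c for _, c in matches})
    return matches, unmatched_tracks, unmatched_dets
```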

Citations: 0
Personalized dental crown design: A point-to-mesh completion network.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-17 | DOI: 10.1016/j.media.2024.103439
Golriz Hosseinimanesh, Ammar Alsheghri, Julia Keren, Farida Cheriet, Francois Guibault

Designing dental crowns with computer-aided design software in dental laboratories is complex and time-consuming. Using real clinical datasets, we developed an end-to-end deep learning model that automatically generates personalized dental crown meshes. The input context includes the prepared tooth, its adjacent teeth, and the two closest teeth in the opposing jaw. The training set contains this context, the ground truth crown, and the extracted margin line. Our model consists of two components: First, a feature extractor converts the input point cloud into a set of local feature vectors, which are then fed into a transformer-based model to predict the geometric features of the crown. Second, a point-to-mesh module generates a dense array of points with normal vectors, and a differentiable Poisson surface reconstruction method produces an accurate crown mesh. Training is conducted with three losses: (1) a customized margin line loss; (2) a contrastive-based Chamfer distance loss; and (3) a mean square error (MSE) loss to control mesh quality. We compare our method with our previously published method, Dental Mesh Completion (DMC). Extensive testing confirms our method's superiority, achieving a 12.32% reduction in Chamfer distance and a 46.43% reduction in MSE compared to DMC. Margin line loss improves Chamfer distance by 5.59%.
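
Of the three losses, the Chamfer distance has a compact definition: the average nearest-neighbor distance from each cloud to the other. A plain symmetric version in PyTorch (the paper's contrastive weighting is not reproduced here):

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """p: (N, 3) predicted crown points; q: (M, 3) ground-truth points."""
    d = torch.cdist(p, q)   # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```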

Citations: 0
Brain networks and intelligence: A graph neural network based approach to resting state fMRI data.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-16 | DOI: 10.1016/j.media.2024.103433
Bishal Thapaliya, Esra Akbas, Jiayu Chen, Ram Sapkota, Bhaskar Ray, Pranav Suresh, Vince D Calhoun, Jingyu Liu

Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful tool for investigating the relationship between brain function and cognitive processes as it allows for the functional organization of the brain to be captured without relying on a specific task or stimuli. In this paper, we present a novel modeling architecture called BrainRGIN for predicting intelligence (fluid, crystallized and total intelligence) using graph neural networks on rsfMRI derived static functional network connectivity matrices. Extending existing graph convolutional networks, our approach incorporates a clustering-based embedding and graph isomorphism network in the graph convolutional layer to reflect the nature of the brain sub-network organization and efficient network expression, in combination with TopK pooling and attention-based readout functions. We evaluated our proposed architecture on a large dataset, specifically the Adolescent Brain Cognitive Development Dataset, and demonstrated its effectiveness in predicting individual differences in intelligence. Our model achieved lower mean squared errors and higher correlation scores than existing relevant graph architectures and other traditional machine learning models for all of the intelligence prediction tasks. The middle frontal gyrus exhibited a significant contribution to both fluid and crystallized intelligence, suggesting their pivotal role in these cognitive processes. Total composite scores identified a diverse set of brain regions to be relevant, which underscores the complex nature of total intelligence. Our GitHub implementation is publicly available on https://github.com/bishalth01/BrainRGIN/.
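
In PyTorch Geometric terms, the abstract's combination of a graph isomorphism layer, TopK pooling, and a pooled readout looks roughly like the sketch below; layer sizes, the pooling ratio, and the mean-pool readout are placeholders rather than BrainRGIN's actual configuration.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, TopKPooling, global_mean_pool

class GinTopKRegressor(nn.Module):
    """GIN convolution + TopK pooling + pooled readout for graph-level
    regression of an intelligence score (illustrative sketch)."""
    def __init__(self, in_dim: int, hid: int = 64):
        super().__init__()
        mlp = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.conv = GINConv(mlp)
        self.pool = TopKPooling(hid, ratio=0.5)   # keep the most salient nodes
        self.head = nn.Linear(hid, 1)

    def forward(self, x, edge_index, batch):
        x = self.conv(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool(x, edge_index, batch=batch)
        return self.head(global_mean_pool(x, batch))
```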

Citations: 0
DDoCT: Morphology preserved dual-domain joint optimization for fast sparse-view low-dose CT imaging.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-15 | DOI: 10.1016/j.media.2024.103420
Linxuan Li, Zhijie Zhang, Yongqing Li, Yanxin Wang, Wei Zhao

Computed tomography (CT) is continuously becoming a valuable diagnostic technique in clinical practice. However, the radiation dose exposure in the CT scanning process is a public health concern. Within medical diagnoses, mitigating the radiation risk to patients can be achieved by reducing the radiation dose through adjustments in tube current and/or the number of projections. Nevertheless, dose reduction introduces additional noise and artifacts, which have extremely detrimental effects on clinical diagnosis and subsequent analysis. In recent years, the feasibility of applying deep learning methods to low-dose CT (LDCT) imaging has been demonstrated, leading to significant achievements. This article proposes a dual-domain joint optimization LDCT imaging framework (termed DDoCT), which uses noisy sparse-view projections to reconstruct high-performance CT images with joint optimization in the projection and image domains. The proposed method not only addresses the noise introduced by reducing tube current, but also pays special attention to issues such as streak artifacts caused by a reduction in the number of projections, enhancing the applicability of DDoCT in practical fast LDCT imaging environments. Experimental results have demonstrated that DDoCT has made significant progress in reducing noise and streak artifacts and enhancing the contrast and clarity of the images.
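
"Dual-domain joint optimization" means the training loss penalizes errors both on the restored sinogram and on the reconstructed image. A minimal joint objective under that reading (the plain MSE terms and uniform weights are our simplifications; DDoCT's morphology-preserving terms are described in the paper):

```python
import torch
import torch.nn.functional as F

def dual_domain_loss(sino_pred, sino_gt, img_pred, img_gt,
                     w_proj: float = 1.0, w_img: float = 1.0) -> torch.Tensor:
    """Weighted sum of projection-domain and image-domain reconstruction errors."""
    return w_proj * F.mse_loss(sino_pred, sino_gt) + w_img * F.mse_loss(img_pred, img_gt)
```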

Citations: 0
AutoFOX: An automated cross-modal 3D fusion framework of coronary X-ray angiography and OCT.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-15 | DOI: 10.1016/j.media.2024.103432
Chunming Li, Yuchuan Qiao, Wei Yu, Yingguang Li, Yankai Chen, Zehao Fan, Runguo Wei, Botao Yang, Zhiqing Wang, Xuesong Lu, Lianglong Chen, Carlos Collet, Miao Chu, Shengxian Tu

Coronary artery disease (CAD) is the leading cause of death globally. The 3D fusion of coronary X-ray angiography (XA) and optical coherence tomography (OCT) provides complementary information to appreciate coronary anatomy and plaque morphology. This significantly improves CAD diagnosis and prognosis by enabling precise hemodynamic and computational physiology assessments. The challenges of fusion lie in the potential misalignment caused by the foreshortening effect in XA and the non-uniform acquisition of OCT pullback. Moreover, the need for reconstructions of major bifurcations is technically demanding. This paper proposes an automated 3D fusion framework, AutoFOX, which consists of the deep learning model TransCAN for 3D vessel alignment. The 3D vessel contours are processed as sequential data, whose features are extracted and integrated with bifurcation information to enhance alignment in a multi-task fashion. TransCAN shows the highest alignment accuracy among all methods with a mean alignment error of 0.99 ± 0.81 mm along the vascular sequence, and only 0.82 ± 0.69 mm at key anatomical positions. The proposed AutoFOX framework uniquely employs an advanced side branch lumen reconstruction algorithm to enhance the assessment of bifurcation lesions. A multi-center dataset is utilized for independent external validation, using paired 3D coronary computed tomography angiography (CTA) as the reference standard. Novel morphological metrics are proposed to evaluate the fusion accuracy. Our experiments show that the fusion model generated by AutoFOX exhibits high morphological consistency with CTA. The AutoFOX framework enables automatic and comprehensive assessment of CAD, especially accurate assessment of bifurcation stenosis, which is of clinical value in guiding procedures and optimization.
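
One sub-problem the abstract names, the non-uniform OCT pullback, is commonly handled by resampling frame positions to uniform arc length before alignment. The helper below is that standard preprocessing step, offered as context rather than as AutoFOX's implementation.

```python
import numpy as np

def resample_uniform(points: np.ndarray, n: int) -> np.ndarray:
    """points: (N, 3) ordered OCT frame centers along the pullback.
    Returns n points spaced uniformly in arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                   # uniform arc-length samples
    return np.column_stack([np.interp(t, s, points[:, k]) for k in range(3)])
```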

Citations: 0
Biomechanical modeling combined with pressure-volume loop analysis to aid surgical planning in patients with complex congenital heart disease
IF 10.9 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-15 | DOI: 10.1016/j.media.2024.103441
Maria Gusseva, Nikhil Thatte, Daniel A. Castellanos, Peter E. Hammer, Sunil J. Ghelani, Ryan Callahan, Tarique Hussain, Radomír Chabiniok
Patients with congenitally corrected transposition of the great arteries (ccTGA) can be treated with a double switch operation (DSO) to restore the normal anatomical connection of the left ventricle (LV) to the systemic circulation and the right ventricle (RV) to the pulmonary circulation. The subpulmonary LV progressively deconditions over time due to its connection to the low-pressure pulmonary circulation and needs to be retrained using a surgical pulmonary artery band (PAB) for 6–12 months prior to the DSO. The subsequent clinical follow-up, consisting of invasive cardiac pressure and non-invasive imaging data, evaluates LV preparedness for the DSO. Evaluation using standard clinical techniques has led to unacceptable LV failure rates of ∼15% after DSO. We propose a computational modeling framework to (1) reconstruct LV and RV pressure-volume (PV) loops from non-simultaneously acquired imaging and pressure data and gather model-derived mechanical indicators of ventricular function; and (2) perform in silico DSO to predict the functional response of the LV when connected to the high-pressure systemic circulation.
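
The PV-loop reconstruction rests on standard ventricular mechanics; the simplest textbook form is the time-varying elastance relation P(t) = E(t) * (V(t) - V0). The toy below uses a sine-squared systolic activation as a crude stand-in for the calibrated biomechanical model in the paper; all parameter values are illustrative.

```python
import numpy as np

def elastance(t, e_max=2.0, e_min=0.06, period=0.8):
    """Time-varying elastance E(t) in mmHg/mL (toy activation curve)."""
    tn = (t % period) / period
    act = np.where(tn < 0.5, np.sin(2.0 * np.pi * tn) ** 2, 0.0)  # systolic bump
    return e_min + (e_max - e_min) * act

def ventricular_pressure(v, t, v0=10.0):
    """P(t) = E(t) * (V(t) - V0); volumes in mL, pressure in mmHg."""
    return elastance(t) * (v - v0)
```
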
Citations: 0
Organ-level instance segmentation enables continuous time-space-spectrum analysis of pre-clinical abdominal photoacoustic tomography images.
IF 10.7 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-12-12 | DOI: 10.1016/j.media.2024.103402
Zhichao Liang, Shuangyang Zhang, Zongxin Mo, Xiaoming Zhang, Anqi Wei, Wufan Chen, Li Qi

Photoacoustic tomography (PAT), as a novel biomedical imaging technique, is able to capture temporal, spatial and spectral tomographic information from organisms. Organ-level multi-parametric analysis of continuous PAT images is of interest since it enables the quantification of organ-specific morphological and functional parameters in small animals. Accurate organ delineation is imperative for organ-level image analysis, yet the low contrast and blurred organ boundaries in PAT images pose a challenge for their precise segmentation. Fortunately, shared structural information among continuous images in the time-space-spectrum domain may be used to enhance segmentation. In this paper, we introduce a structure fusion enhanced graph convolutional network (SFE-GCN), which aims at automatically segmenting major organs including the body, liver, kidneys, spleen, vessel and spine in abdominal PAT images of mice. SFE-GCN enhances the structural feature of organs by fusing information in continuous image sequences captured in the time, space and spectrum domains. As validated on large-scale datasets across different imaging scenarios, our method not only preserves fine structural details but also ensures anatomically aligned organ contours. Most importantly, this study explores the application of SFE-GCN in multi-dimensional organ image analysis, including organ-based dynamic morphological analysis, organ-wise light fluence correction and segmentation-enhanced spectral un-mixing. Code will be released at https://github.com/lzc-smu/SFEGCN.git.
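
The "segmentation-enhanced spectral un-mixing" step, in its simplest linear form, decomposes the measured multi-wavelength PA spectrum onto reference chromophore spectra by non-negative least squares. A sketch under that linear assumption (the endmember choice and normalization are ours):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(spectrum: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """spectrum: (W,) PA amplitudes at W wavelengths for one organ/pixel.
    endmembers: (W, K) reference absorption spectra, e.g. HbO2 and Hb.
    Returns K relative chromophore concentrations."""
    coeffs, _ = nnls(endmembers, spectrum)   # non-negative least squares
    return coeffs / (coeffs.sum() + 1e-12)   # normalize, e.g. to estimate sO2
```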

Citations: 0