
Latest Articles in Computerized Medical Imaging and Graphics

Portable head CT motion artifact correction via diffusion-based generative model
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102478
Zhennong Chen , Siyeop Yoon , Quirin Strotzer , Rehab Naeem Khalid , Matthew Tivnan , Quanzheng Li , Rajiv Gupta , Dufan Wu
Portable head CT images often suffer from motion artifacts due to prolonged scanning times and critically ill patients who are unable to hold still. Image-domain motion correction is attractive for this application as it does not require CT projection data. This paper describes and evaluates a generative model based on conditional diffusion to correct motion artifacts in portable head CT scans. The model was trained to find the motion-free CT image conditioned on the paired motion-corrupted image. Our method utilizes histogram equalization to resolve the intensity range discrepancy between skull and brain tissue, and adopts the Elucidated Diffusion Model (EDM) framework for faster sampling and better motion correction performance. In a simulation study and a phantom study with known motion-free ground truth, our EDM framework outperforms CNN-based methods and the standard diffusion approach (DDPM) in correcting artifacts both in the brain tissue region and across the entire image. Furthermore, we conducted a reader study on real-world portable CT scans to demonstrate the improvement in image quality achieved by our method.
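To make the preprocessing step concrete, the following is a minimal sketch (not the authors' released code) of histogram equalization applied to a CT slice in Hounsfield units, which compresses the very bright skull and stretches the narrow soft-tissue range onto a common scale; the HU window and bin count are illustrative assumptions.

```python
import numpy as np

def equalize_ct(slice_hu: np.ndarray, hu_min: float = -100.0, hu_max: float = 1500.0,
                n_bins: int = 256) -> np.ndarray:
    """Map a CT slice (Hounsfield units) to [0, 1] via histogram equalization."""
    clipped = np.clip(slice_hu, hu_min, hu_max)
    hist, bin_edges = np.histogram(clipped, bins=n_bins, range=(hu_min, hu_max))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize the cumulative histogram to [0, 1]
    # Push each voxel's intensity through the CDF to flatten the histogram.
    return np.interp(clipped, bin_edges[:-1], cdf)
```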
Citations: 0
Inspect quantitative signals in placental histopathology: Computer-assisted multiple functional tissues identification through multi-model fusion and distillation framework
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102482
Yiming Liu , Ling Zhang , Mingxue Gu , Yaoxing Xiao , Ting Yu , Xiang Tao , Qing Zhang , Yan Wang , Dinggang Shen , Qingli Li
Pathological analysis of the placenta is a valuable tool for gaining insights into pregnancy outcomes. In placental histopathology, multiple functional tissues can be inspected as potential signals reflecting the transfer functionality between fetal and maternal circulations. However, identifying these functional tissues is challenging due to (1) severe heterogeneity in texture, size and shape, (2) distribution across different scales and (3) the need for comprehensive assessment at the whole slide image (WSI) level. To solve the aforementioned problems, we establish a brand-new dataset and propose a computer-aided segmentation framework based on multi-model fusion and distillation to identify multiple functional tissues in placental histopathologic images, including villi, capillaries, fibrin deposits and trophoblast aggregations. Specifically, we propose a two-stage Multi-model Fusion and Distillation (MMFD) framework. Considering the multi-scale distribution and heterogeneity of the functional tissues, the first stage enhances the visual representation by fusing features from multiple models to boost the effectiveness of the network. However, the multi-model fusion stage introduces extra parameters and a significant computational burden, which is impractical for recognizing gigapixel WSIs in clinical practice. In the second stage, we therefore propose a straightforward plug-in feature-distillation method that transfers knowledge from the large fused model to a compact student model. On a self-collected placental dataset, our proposed MMFD framework demonstrates an improvement of 4.3% in mean Intersection over Union (mIoU) while achieving an approximately 50% increase in inference speed and utilizing only 10% of the parameters and computational resources of the parameter-efficient fine-tuned Segment Anything Model (SAM) baseline. Visualization of segmentation results across entire WSIs on unseen cases demonstrates the generalizability of our proposed MMFD framework, and experimental results on a public dataset further prove its effectiveness on other tasks. Our work presents a fundamental method to expedite the quantitative analysis of placental histopathology.
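As a rough illustration of the second-stage idea, the sketch below shows a generic plug-in feature-distillation loss under assumed shapes: student features are projected to the fused teacher's channel width with a 1x1 convolution and pulled toward the frozen teacher features with an L2 penalty. This is a common formulation offered for orientation, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """L2 distillation between student features and frozen fused-teacher features."""

    def __init__(self, student_ch: int, teacher_ch: int):
        super().__init__()
        # 1x1 conv aligns channel dimensions before comparison.
        self.proj = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        f_s = self.proj(f_student)
        if f_s.shape[-2:] != f_teacher.shape[-2:]:
            # Match spatial resolution when the two backbones downsample differently.
            f_s = F.interpolate(f_s, size=f_teacher.shape[-2:], mode="bilinear",
                                align_corners=False)
        return F.mse_loss(f_s, f_teacher.detach())  # teacher provides fixed targets
```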
Citations: 0
Guidelines for cerebrovascular segmentation: Managing imperfect annotations in the context of semi-supervised learning
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102474
Pierre Rougé , Pierre-Henri Conze , Nicolas Passat , Odyssée Merveille
Segmentation in medical imaging is an essential and often preliminary task in the image processing chain, driving numerous efforts towards the design of robust segmentation algorithms. Supervised learning methods achieve excellent performance when fed with a sufficient amount of labeled data. However, such labels are typically highly time-consuming, error-prone and expensive to produce. Alternatively, semi-supervised learning approaches leverage both labeled and unlabeled data, and are very useful when only a small fraction of the dataset is labeled. They are particularly useful for cerebrovascular segmentation, given that labeling a single volume requires several hours for an expert. In addition to the challenge posed by insufficient annotations, there are concerns regarding annotation consistency. The task of annotating the cerebrovascular tree is inherently ambiguous: due to the discrete nature of images, the borders and extremities of vessels are often unclear. Consequently, annotations rely heavily on expert subjectivity and on the underlying clinical objective. These discrepancies significantly increase the complexity of the segmentation task for the model and consequently impair the results. It is therefore imperative to provide clinicians with precise guidelines to improve the annotation process and construct more uniform datasets. In this article, we investigate the data dependency of deep learning methods for cerebrovascular segmentation in the context of imperfect data and semi-supervised learning. Specifically, this study compares various state-of-the-art semi-supervised methods based on unsupervised regularization and evaluates their performance in diverse data-quantity and data-quality scenarios. Based on these experiments, we provide guidelines for the annotation and training of cerebrovascular segmentation models.
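For readers unfamiliar with the unsupervised-regularization family compared in this study, the sketch below shows one representative member, mean-teacher consistency training, in which a student segmentation network is penalized for disagreeing with an exponential-moving-average teacher on unlabeled volumes; the EMA rate and the MSE consistency loss are illustrative assumptions, not a specific method from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha: float = 0.99):
    # Teacher weights track an exponential moving average of the student's.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student, teacher, unlabeled: torch.Tensor) -> torch.Tensor:
    """Penalize student/teacher disagreement on unlabeled scans."""
    with torch.no_grad():
        target = torch.softmax(teacher(unlabeled), dim=1)  # teacher is not trained
    pred = torch.softmax(student(unlabeled), dim=1)
    return F.mse_loss(pred, target)
```

In a training loop, this term is typically added to the supervised loss on labeled volumes with a ramp-up weight, so the unlabeled data regularizes the student without overwhelming the supervised signal.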
Citations: 0
A review of the Segment Anything Model (SAM) for medical image analysis: Accomplishments and perspectives
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102473
Mudassar Ali , Tong Wu , Haoji Hu , Qiong Luo , Dong Xu , Weizeng Zheng , Neng Jin , Chen Yang , Jincao Yao
This paper provides an overview of developments in the Segment Anything Model (SAM) for medical image segmentation over the past year. Although direct application to medical datasets has shown mixed results, SAM has demonstrated notable achievements in adapting to medical image segmentation tasks through fine-tuning on medical datasets, transitioning from 2D to 3D datasets, and optimizing prompt engineering. Despite the difficulties, the paper emphasizes the significant potential that SAM possesses in the field of medical segmentation. Suggested directions for the future include constructing large-scale datasets, addressing multi-modal and multi-scale information, integrating with semi-supervised learning structures, and extending SAM's application methods in clinical settings, all of which would further its contribution to the field of medical segmentation.
Citations: 0
A multi-view contrastive learning and semi-supervised self-distillation framework for early recurrence prediction in ovarian cancer
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102477
Chi Dong , Yujiao Wu , Bo Sun , Jiayi Bo , Yufei Huang , Yikang Geng , Qianhui Zhang , Ruixiang Liu , Wei Guo , Xingling Wang , Xiran Jiang

Objective

This study presents a novel framework that integrates contrastive learning and knowledge distillation to improve early ovarian cancer (OC) recurrence prediction, addressing the challenges posed by limited labeled data and tumor heterogeneity.

Methods

The research utilized CT imaging data from 585 OC patients, including 142 cases with complete follow-up information and 125 cases with unknown recurrence status. To pre-train the teacher network, 318 unlabeled images were sourced from public datasets (TCGA-OV and PLAGH-202-OC). Multi-view contrastive learning (MVCL) was employed to generate multi-view 2D tumor slices, enhancing the teacher network’s ability to extract features from complex, heterogeneous tumors with high intra-class variability. Building on this foundation, the proposed semi-supervised multi-task self-distillation (Semi-MTSD) framework integrated OC subtyping as an auxiliary task using multi-task learning (MTL). This approach allowed the co-training of a student network for recurrence prediction, leveraging both labeled and unlabeled data to improve predictive performance in data-limited settings. The student network's performance was assessed using preoperative CT images with known recurrence outcomes. Evaluation metrics included area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), F1 score, floating-point operations (FLOPs), parameter count, training time, inference time, and mean corruption error (mCE).

Results

The proposed framework achieved an ACC of 0.862, an AUC of 0.916, a SPE of 0.895, and an F1 score of 0.831, surpassing existing methods for OC recurrence prediction. Comparative and ablation studies validated the model’s robustness, particularly in scenarios characterized by data scarcity and tumor heterogeneity.

Conclusion

The MVCL and Semi-MTSD framework demonstrates significant advancements in OC recurrence prediction, showcasing strong generalization capabilities in complex, data-constrained environments. This approach offers a promising pathway toward more personalized treatment strategies for OC patients.
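As a rough illustration of the contrastive pre-training described in the Methods above, the following sketch implements a standard NT-Xent (SimCLR-style) loss over paired embeddings of two views of the same tumors; the temperature and batch layout are assumptions, since the paper's exact MVCL formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two views of the same N samples."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                 # exclude self-similarity
    # The positive for sample i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```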
Citations: 0
Utilizing domain knowledge to improve the classification of intravenous contrast phase of CT scans
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102458
Liangchen Liu , Jianfei Liu , Bikash Santra , Christopher Parnell , Pritam Mukherjee , Tejas Mathai , Yingying Zhu , Akshaya Anand , Ronald M. Summers
Multiple intravenous contrast phases of CT scans are commonly used in clinical practice to facilitate disease diagnosis. However, contrast phase information is often missing or incorrect due to discrepancies in CT series descriptions and imaging practices. This work aims to develop a classification algorithm that automatically determines the contrast phase of a CT scan. We hypothesize that the image intensities of key organs (e.g., aorta, inferior vena cava) affected by contrast enhancement carry the inherent feature information needed to decide the contrast phase. These organs are segmented by TotalSegmentator, and intensity features are then generated on each segmented organ region. Two internal datasets and one external dataset were collected to validate the classification accuracy. In comparison with a baseline ResNet classifier that did not use key-organ features, the proposed method achieved a comparable accuracy of 92.5% and F1 score of 92.5% on one internal dataset. On the other internal dataset, the proposed method improved accuracy from 63.9% to 79.8% and the F1 score from 43.9% to 65.0%. On the external dataset, accuracy improved from 63.5% to 85.1% and the F1 score from 56.4% to 83.9%. Image intensity features from key organs are therefore critical for improving the classification accuracy of contrast phases of CT scans, and the classification method based on these features is robust to different scanners and imaging protocols across institutes. Our results suggest improved classification accuracy over existing approaches, advancing automatic contrast phase classification toward real clinical practice. The code for this work can be found here: (https://github.com/rsummers11/CT_Contrast_Phase_Classifier).
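The hypothesis above lends itself to a simple feature pipeline. The sketch below is an illustration rather than the code at the linked repository: it computes per-organ intensity statistics from a CT volume and boolean organ masks such as those produced by TotalSegmentator, with the organ list and chosen statistics being assumptions.

```python
import numpy as np

def organ_intensity_features(ct_hu: np.ndarray, masks: dict) -> np.ndarray:
    """masks maps organ name -> boolean array of the same shape as ct_hu."""
    feats = []
    # Hypothetical organ list; contrast enhancement shows strongly in great vessels.
    for organ in ("aorta", "inferior_vena_cava", "kidney_left", "kidney_right"):
        mask = masks.get(organ)
        if mask is None or not mask.any():   # organ absent from the field of view
            feats += [0.0, 0.0, 0.0]
            continue
        voxels = ct_hu[mask]
        feats += [voxels.mean(), voxels.std(),
                  np.percentile(voxels, 90)]  # bright-blood enhancement cue
    return np.asarray(feats, dtype=np.float32)
```

The resulting fixed-length feature vector can then feed any lightweight classifier (logistic regression, gradient boosting, or a small MLP) to predict the contrast phase.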
Citations: 0
Post-hoc out-of-distribution detection for cardiac MRI segmentation
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102476
Tewodros Weldebirhan Arega , Stéphanie Bricq , Fabrice Meriaudeau
In real-world scenarios, medical image segmentation models encounter input images that may deviate from the training images in various ways. These differences can arise from changes in image scanners and acquisition protocols, or the images may even come from a different modality or domain. When the model encounters these out-of-distribution (OOD) images, it can behave unpredictably. It is therefore important to develop a system that handles such out-of-distribution images to ensure the safe usage of models in clinical practice. In this paper, we propose a post-hoc out-of-distribution (OOD) detection method that can be used with any pre-trained segmentation model. Our method utilizes multi-scale representations extracted from the encoder blocks of the segmentation model and employs the Mahalanobis distance as a metric to measure the similarity between the input image and the in-distribution images. The segmentation model is pre-trained on a publicly available cardiac short-axis cine MRI dataset. The detection performance of the proposed method is evaluated on 13 different OOD datasets, which can be categorized as near, mild, and far OOD datasets based on their similarity to the in-distribution dataset. The results show that our method outperforms state-of-the-art feature-space-based and uncertainty-based OOD detection methods across the various OOD datasets. Our method successfully detects near, mild, and far OOD images with high detection accuracy, showcasing the advantage of using the multi-scale and semantically rich representations of the encoder. In addition to the feature-based approach, we also propose a Dice-coefficient-based OOD detection method, which demonstrates superior performance for adversarial OOD detection and shows a high correlation with segmentation quality. The uncertainty-based methods, despite correlating strongly with segmentation quality on the near-OOD datasets, failed to detect mild and far OOD images, indicating their weakness when the images are more dissimilar. Future work will explore combining Mahalanobis distance and uncertainty scores for improved detection of challenging OOD images that are difficult to segment.
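To make the feature-space score concrete, here is a minimal sketch of Mahalanobis-distance OOD scoring: fit a Gaussian to pooled in-distribution encoder features, then score a test image by its distance to that Gaussian. Pooling each image to a single vector and fitting one Gaussian are simplifying assumptions; the paper aggregates multi-scale encoder representations.

```python
import numpy as np

def fit_gaussian(train_feats: np.ndarray):
    """train_feats: (N, D) pooled encoder features from in-distribution scans."""
    mu = train_feats.mean(axis=0)
    # Small ridge on the covariance keeps the inverse numerically stable.
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(feat: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
    """Larger scores indicate inputs farther from the in-distribution features."""
    d = feat - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

A detection threshold on this score is then typically chosen on held-out in-distribution data (for example at a fixed true-positive rate) before the detector is applied to unseen scans.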
Citations: 0
Adaptive fusion of dual-view for grading prostate cancer
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-01 DOI: 10.1016/j.compmedimag.2024.102479
Yaolin He , Bowen Li , Ruimin He , Guangming Fu , Dan Sun , Dongyong Shan , Zijian Zhang
Accurate preoperative grading of prostate cancer is crucial for assisted diagnosis. Multi-parametric magnetic resonance imaging (MRI) is a commonly used non-invasive approach; however, the interpretation of MRI images remains highly subjective owing to variations in physicians' expertise and experience. To achieve accurate, non-invasive, and efficient grading of prostate cancer, this paper proposes a deep learning method that adaptively fuses dual-view MRI images. Specifically, a dual-view adaptive fusion model is designed. The model employs encoders to extract embedded features from two MRI sequences: T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC). It reconstructs the original input images from the embedded features and adopts a cross-embedding fusion module to adaptively fuse the embedded features from the two views. Adaptive fusion refers to dynamically adjusting the fusion weights of the features from the two views according to the input sample, thereby fully exploiting complementary information. Furthermore, the model adaptively weights the prediction results from the two views based on uncertainty estimation, further enhancing the grading performance. To verify the importance of effective multi-view fusion for prostate cancer grading, extensive experiments were designed, evaluating the performance of single-view models, dual-view models, and state-of-the-art multi-view fusion algorithms. The results demonstrate that the proposed dual-view adaptive fusion method achieves the best grading performance, confirming its effectiveness for assisted grading diagnosis of prostate cancer. This study provides a novel deep learning solution for preoperative grading of prostate cancer, which has the potential to assist clinical physicians in making more accurate diagnostic decisions and has significant clinical application value.
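One common way to realize the uncertainty-based weighting described above is to down-weight the view with the higher predictive entropy, as in the following sketch; the entropy-based uncertainty and inverse weighting are illustrative assumptions rather than the paper's exact estimator.

```python
import torch

def entropy(probs: torch.Tensor) -> torch.Tensor:
    """Predictive entropy per sample; probs has shape (N, C)."""
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)

def fuse_views(logits_t2w: torch.Tensor, logits_adc: torch.Tensor) -> torch.Tensor:
    """Fuse two views' predictions, weighting each inversely to its uncertainty."""
    p1, p2 = logits_t2w.softmax(dim=1), logits_adc.softmax(dim=1)
    u1, u2 = entropy(p1), entropy(p2)
    w1, w2 = 1.0 / (u1 + 1e-8), 1.0 / (u2 + 1e-8)
    total = w1 + w2
    w1, w2 = (w1 / total).unsqueeze(1), (w2 / total).unsqueeze(1)
    return w1 * p1 + w2 * p2  # fused class probabilities
```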
Citations: 0
Head pose-assisted localization of facial landmarks for enhanced fast registration in skull base surgery
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-30 DOI: 10.1016/j.compmedimag.2024.102483
Yifei Yang , Jingfan Fan , Tianyu Fu , Deqiang Xiao , Dongsheng Ma , Hong Song , Zhengkai Feng , Youping Liu , Jian Yang
In skull base surgery, acquiring intraoperative facial point clouds for spatial registration by tracing with a probe or scanning with a 3D scanner presents several issues. Manual manipulation results in inefficiency and poor consistency, traditional registration algorithms based on point clouds are highly dependent on the initial pose, and the complexity of these algorithms can further extend the required time. To address these issues, we used an RGB-D camera to capture real-time facial point clouds during surgery. The initial registration between the 3D model reconstructed from preoperative CT/MR images and the point cloud collected during surgery is accomplished through corresponding facial landmarks. The facial point clouds collected intraoperatively often contain rotations caused by the free-angle camera. Benefiting from the close spatial geometric relationship between head pose and facial landmark coordinates, we propose a facial landmark localization network assisted by head pose estimation. The shared-representation head pose estimation module boosts network performance by enhancing its perception of global facial features. The proposed network facilitates the localization of landmark points in both preoperative and intraoperative point clouds, enabling rapid automatic registration. A free-view human facial landmarks dataset called 3D-FVL was synthesized from clinical CT images for training. The proposed network achieves leading localization accuracy and robustness on two public datasets and on 3D-FVL. In clinical experiments using the Artec Eva scanner, the trained network reduced the average registration time to 0.28 s, with an average registration error of 2.33 mm. The proposed method significantly reduces registration time while meeting clinical accuracy requirements for surgical navigation. Our research will help improve the efficiency and quality of skull base surgery.
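Once corresponding landmarks are localized in the preoperative model and the intraoperative point cloud, the initial rigid alignment reduces to the classical Kabsch/Procrustes problem, sketched below with an SVD; this is the textbook solution, offered as an illustration of the registration step rather than the authors' implementation.

```python
import numpy as np

def rigid_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """src, dst: (K, 3) corresponding landmarks; returns rotation R (3x3), translation t (3,)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t                           # dst ~= (R @ src.T).T + t
```

In practice this landmark-based estimate serves as the initial pose, after which a fine registration (e.g., ICP over the full point clouds) can refine the alignment.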
Citations: 0
Attention incorporated network for sharing low-rank, image and k-space information during MR image reconstruction to achieve single breath-hold cardiac Cine imaging
IF 5.4 Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-28 DOI: 10.1016/j.compmedimag.2024.102475
Siying Xu , Kerstin Hammernik , Andreas Lingg , Jens Kübler , Patrick Krumm , Daniel Rueckert , Sergios Gatidis , Thomas Küstner
Cardiac Cine Magnetic Resonance Imaging (MRI) provides an accurate assessment of heart morphology and function in clinical practice. However, MRI requires long acquisition times, and recent deep learning-based methods show great promise for accelerating imaging and enhancing reconstruction quality. Existing networks exhibit some common limitations that constrain further acceleration, including single-domain learning, reliance on a single regularization term, and equal feature contribution. To address these limitations, we propose to embed information from multiple domains, including low-rank, image, and k-space, in a novel deep learning network for MRI reconstruction, which we denote A-LIKNet. A-LIKNet adopts a parallel-branch structure, enabling independent learning in the k-space and image domains. Coupled information-sharing layers realize the information exchange between domains. Furthermore, we introduce attention mechanisms into the network to assign greater weights to more critical coils or important temporal frames. Training and testing were conducted on an in-house dataset, including 91 cardiovascular patients and 38 healthy subjects scanned with 2D cardiac Cine using retrospective undersampling. Additionally, we evaluated A-LIKNet on real-time prospectively undersampled data from the OCMR dataset. The results demonstrate that our proposed A-LIKNet outperforms existing methods and provides high-quality reconstructions. The network can effectively reconstruct highly retrospectively undersampled dynamic MR images at up to 24× acceleration, indicating its potential for single breath-hold imaging.
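A building block commonly used by hybrid image/k-space reconstruction networks of this kind is a data-consistency step that re-imposes the acquired k-space samples on the current image estimate; the single-coil, hard-replacement sketch below is an illustrative assumption about this family of architectures, not A-LIKNet's actual layer.

```python
import torch

def data_consistency(img: torch.Tensor, kspace_acq: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """img: complex image estimate; kspace_acq: acquired k-space; mask: 1 where sampled."""
    k_est = torch.fft.fft2(img, norm="ortho")
    # Keep the measured k-space lines, trust the network everywhere else.
    k_dc = torch.where(mask.bool(), kspace_acq, k_est)
    return torch.fft.ifft2(k_dc, norm="ortho")
```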
Citations: 0