
Latest articles in IEEE Transactions on Medical Imaging

Frenet–Serret Frame-Based Decomposition for Part Segmentation of 3-D Curvilinear Structures
Pub Date : 2025-07-16 DOI: 10.1109/TMI.2025.3589543
Shixuan Leslie Gu;Jason Ken Adhinarta;Mikhail Bessmeltsev;Jiancheng Yang;Yongjie Jessica Zhang;Wenjie Yin;Daniel Berger;Jeff W. Lichtman;Hanspeter Pfister;Donglai Wei
Accurate segmentation of anatomical substructures within 3D curvilinear structures in medical imaging remains challenging due to their complex geometry and the scarcity of diverse, large-scale datasets for algorithm development and evaluation. In this paper, we use dendritic spine segmentation as a case study and address these challenges by introducing a novel Frenet-Serret Frame-based Decomposition, which decomposes 3D curvilinear structures into a globally smooth continuous curve that captures the overall shape, and a cylindrical primitive that encodes local geometric properties. This approach leverages Frenet-Serret Frames and arc-length parameterization to preserve essential geometric features while reducing representational complexity, facilitating data-efficient learning, improved segmentation accuracy, and generalization on 3D curvilinear structures. To rigorously evaluate our method, we introduce two datasets: CurviSeg, a synthetic dataset for 3D curvilinear structure segmentation that validates our method’s key properties, and DenSpineEM, a benchmark for dendritic spine segmentation, which comprises 4,476 manually annotated spines from 70 dendrites across three public electron microscopy datasets, covering multiple brain regions and species. Our experiments on DenSpineEM demonstrate exceptional cross-region and cross-species generalization: models trained on the mouse somatosensory cortex subset achieve 94.43% Dice, maintaining strong performance in zero-shot segmentation on both mouse visual cortex (95.61% Dice) and human frontal lobe (86.63% Dice) subsets. Moreover, we test the generalizability of our method on the IntrA dataset, where it achieves 77.08% Dice (5.29% higher than prior art) on intracranial aneurysm segmentation from entire artery models. These findings demonstrate the potential of our approach for accurately analyzing complex curvilinear structures across diverse medical imaging fields.
Our dataset, code, and models are available at https://github.com/VCG/FFD4DenSpineEM to support future research.
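The decomposition rests on two standard constructions, Frenet-Serret frames and arc-length parameterization. A minimal NumPy sketch of both on a discretized curve follows; this is an illustration of the underlying geometry only, not the paper's actual pipeline:

```python
import numpy as np

def frenet_serret_frames(points):
    """Approximate unit tangent, normal, and binormal along a sampled 3D curve.

    Assumes the curve has nonzero curvature everywhere (straight segments
    would make the normal undefined)."""
    d = np.gradient(points, axis=0)                      # curve derivative
    T = d / np.linalg.norm(d, axis=1, keepdims=True)     # unit tangent
    dT = np.gradient(T, axis=0)                          # derivative of tangent
    N = dT / np.linalg.norm(dT, axis=1, keepdims=True)   # unit normal
    B = np.cross(T, N)                                   # binormal completes the frame
    return T, N, B

def arc_length_parameterization(points):
    """Cumulative arc length along the curve, normalized to [0, 1]."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    return s / s[-1]

# Example: a helix, whose frames are well defined everywhere.
t = np.linspace(0.0, 4.0 * np.pi, 200)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
T, N, B = frenet_serret_frames(helix)
s = arc_length_parameterization(helix)
```

On the helix the computed frames stay orthonormal away from the curve endpoints, where one-sided finite differences are less accurate.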
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5319-5331.
Citations: 0
Debiasing Medical Knowledge for Prompting Universal Model in CT Image Segmentation
Pub Date : 2025-07-15 DOI: 10.1109/TMI.2025.3589399
Boxiang Yun;Shitian Zhao;Qingli Li;Alex Kot;Yan Wang
With the assistance of large language models, which offer universal medical prior knowledge via text prompts, state-of-the-art Universal Models (UM) have demonstrated considerable potential in medical image segmentation. Semantically detailed text prompts, on the one hand, convey comprehensive knowledge; on the other hand, they bring biases that may not apply to specific cases involving heterogeneous organs or rare cancers. To this end, we propose a Debiased Universal Model (DUM) that considers instance-level context information and removes knowledge biases in text prompts from a causal perspective. We are the first to discover and mitigate the bias introduced by universal knowledge. Specifically, we extract organ-level text prompts via language models and instance-level context prompts from the visual features of each image, aiming to emphasize factual instance-level information and mitigate organ-level knowledge bias. This process is derived from and theoretically supported by a causal graph, and instantiated by designing a standard UM (SUM) and a biased UM. The debiased output is finally obtained by subtracting the likelihood distribution output by the biased UM from that of the SUM. Experiments on three large-scale multi-center external datasets and MSD internal tumor datasets show that our method enhances the model’s generalization across diverse medical scenarios and reduces potential biases, achieving a 4.16% improvement over a popular universal model on the AbdomenAtlas dataset and showing strong generalizability. The code is publicly available at https://github.com/DeepMed-Lab-ECNU/DUM.
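The subtraction step described in the abstract can be illustrated with a toy sketch. Performing the subtraction on logits (log-likelihood space) and renormalizing is an assumed concretization here, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def debiased_prediction(sum_logits, biased_logits):
    """Subtract the biased UM's contribution from the standard UM (SUM) output.

    NOTE: operating on logits and renormalizing is an assumption; the paper
    describes subtracting likelihood distributions."""
    return softmax(sum_logits - biased_logits)

# Toy example: 4 voxels, 3 classes.
rng = np.random.default_rng(0)
sum_logits = rng.normal(size=(4, 3))
biased_logits = rng.normal(size=(4, 3))
probs = debiased_prediction(sum_logits, biased_logits)
```

When the biased model is uninformative (zero logits everywhere), the debiased output reduces to the standard model's prediction, which is the sanity check one would expect of such a correction.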
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5142-5154.
Citations: 0
Leveraging Segment Anything Model for Source-Free Domain Adaptation via Dual Feature Guided Auto-Prompting
Pub Date : 2025-07-15 DOI: 10.1109/TMI.2025.3587733
Zheang Huai;Hui Tang;Yi Li;Zhuangzhuang Chen;Xiaomeng Li
Source-free domain adaptation (SFDA) for segmentation aims to adapt a model trained on the source domain to perform well in the target domain given only the source model and unlabeled target data. Inspired by the recent success of the Segment Anything Model (SAM), which can segment images of various modalities and domains given human-annotated prompts such as bounding boxes or points, we explore for the first time the potential of SAM for SFDA by automatically finding an accurate bounding box prompt. We find that bounding boxes directly generated with existing SFDA approaches are defective due to the domain gap. To tackle this issue, we propose a novel Dual Feature Guided (DFG) auto-prompting approach to search for the box prompt. Specifically, the source model is first trained in a feature aggregation phase, which not only preliminarily adapts the source model to the target domain but also builds a feature distribution well prepared for box prompt search. In the second phase, based on two observations of the feature distribution, we gradually expand the box prompt under the guidance of the target model features and the SAM features to handle class-wise clustered and class-wise dispersed target features, respectively. To remove potentially enlarged false-positive regions caused by over-confident predictions of the target model, the refined pseudo-labels produced by SAM are further post-processed with connectivity analysis. Experiments on 3D and 2D datasets indicate that our approach yields superior performance compared to conventional methods. Code is available at https://github.com/xmed-lab/DFG.
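The gradual box expansion can be caricatured with a toy 2D version that grows a box while the newly covered border still scores above a threshold. The score map, threshold, and one-pixel step are illustrative stand-ins for the paper's feature-guided criteria:

```python
import numpy as np

def border_max(score_map, box):
    """Maximum score on the border of a half-open box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return max(score_map[y0, x0:x1].max(), score_map[y1 - 1, x0:x1].max(),
               score_map[y0:y1, x0].max(), score_map[y0:y1, x1 - 1].max())

def expand_box(score_map, box, thresh=0.5, max_steps=100):
    """Grow the box one pixel per side while the grown border still contains
    a score above `thresh`; stop at the image edge or when growth is rejected."""
    h, w = score_map.shape
    for _ in range(max_steps):
        x0, y0, x1, y1 = box
        grown = (max(x0 - 1, 0), max(y0 - 1, 0), min(x1 + 1, w), min(y1 + 1, h))
        if grown == box or border_max(score_map, grown) < thresh:
            return box
        box = grown
    return box

# A bright 8x8 target on a 20x20 map; seed a small box near its center.
score = np.zeros((20, 20))
score[6:14, 6:14] = 1.0
final_box = expand_box(score, (9, 9, 11, 11))
```

Starting from the small seed, the box grows until its next expansion would lie entirely in the zero-score background, leaving a tight box around the bright target.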
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5077-5088.
Citations: 0
Robust Polyp Detection and Diagnosis Through Compositional Prompt-Guided Diffusion Models
Pub Date : 2025-07-15 DOI: 10.1109/TMI.2025.3589456
Jia Yu;Yan Zhu;Peiyao Fu;Tianyi Chen;Junbo Huang;Quanlin Li;Pinghong Zhou;Zhihua Wang;Fei Wu;Shuo Wang;Xian Yang
Colorectal cancer (CRC) is a significant global health concern, and early detection through screening plays a critical role in reducing mortality. While deep learning models have shown promise in improving polyp detection, classification, and segmentation, their generalization across diverse clinical environments, particularly with out-of-distribution (OOD) data, remains a challenge. Multi-center datasets like PolypGen have been developed to address these issues, but their collection is costly and time-consuming. Traditional data augmentation techniques provide limited variability and fail to capture the complexity of medical images. Diffusion models have emerged as a promising solution for generating synthetic polyp images, but the generation process in current models mainly relies on segmentation masks as the condition, limiting their ability to capture the full clinical context. To overcome these limitations, we propose a Progressive Spectrum Diffusion Model (PSDM) that integrates diverse clinical annotations, such as segmentation masks, bounding boxes, and colonoscopy reports, by transforming them into compositional prompts. These prompts are organized into coarse and fine components, allowing the model to capture both broad spatial structures and fine details, generating clinically accurate synthetic images. By augmenting training data with PSDM-generated samples, our model significantly improves polyp detection, classification, and segmentation. For instance, on the PolypGen dataset, PSDM increases the F1 score by 2.12% and the mean average precision by 3.09%, demonstrating superior performance in OOD scenarios and enhanced generalization.
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5245-5257 (open access).
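One way the coarse/fine organization of compositional prompts could be represented is sketched below; the container and field names are assumptions for illustration, not PSDM's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class CompositionalPrompt:
    """Toy container splitting clinical annotations into coarse and fine parts."""
    coarse: dict = field(default_factory=dict)   # broad structure: boxes, report phrases
    fine: dict = field(default_factory=dict)     # fine detail: segmentation masks

def build_prompt(mask=None, box=None, report=None):
    """Route each annotation type into the tier it informs (an assumed split:
    boxes and report text as coarse conditioning, masks as fine conditioning)."""
    p = CompositionalPrompt()
    if box is not None:
        p.coarse["box"] = box
    if report is not None:
        p.coarse["report"] = report
    if mask is not None:
        p.fine["mask"] = mask
    return p

prompt = build_prompt(mask="binary_mask", box=(32, 40, 96, 104),
                      report="sessile polyp, Paris 0-Is")
```

A diffusion model conditioned this way can attend to the coarse tier for spatial layout and the fine tier for boundary detail.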
Citations: 0
Attention-Based Shape-Deformation Networks for Artifact-Free Geometry Reconstruction of Lumbar Spine From MR Images
Pub Date : 2025-07-15 DOI: 10.1109/TMI.2025.3588831
Linchen Qian;Jiasong Chen;Linhai Ma;Timur Urakov;Weiyong Gu;Liang Liang
Lumbar disc degeneration, a progressive structural wear and tear of the lumbar intervertebral disc, is regarded as playing an essential role in low back pain, a significant global health concern. Automated lumbar spine geometry reconstruction from MR images will enable fast measurement of medical parameters to evaluate the lumbar status and determine a suitable treatment. Existing image segmentation-based techniques often generate erroneous segments or unstructured point clouds, unsuitable for medical parameter measurement. In this work, we present UNet-DeformSA and TransDeformer: novel attention-based deep neural networks that reconstruct the geometry of the lumbar spine with high spatial accuracy and mesh correspondence across patients, and we also present a variant of TransDeformer for error estimation. Specifically, we devise new attention modules with a new attention formula, which integrate tokenized image features and tokenized shape features to predict the displacements of the points on a shape template. The deformed template reveals the lumbar spine geometry in an image. Experiment results show that our networks generate artifact-free geometry outputs, and the variant of TransDeformer can predict the errors of a reconstructed geometry. Our code is available at https://github.com/linchenq/TransDeformer-Mesh.
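The core idea, cross-attention from tokenized shape features to tokenized image features to predict a per-point displacement of a template, can be sketched as follows. The single-head form and weight shapes are simplifications, not the paper's actual attention formula:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def displacement_cross_attention(shape_tokens, image_tokens, Wq, Wk, Wv, Wo):
    """Single-head cross-attention: shape tokens query image tokens, and the
    attended features are projected to a 3D displacement per template point."""
    Q = shape_tokens @ Wq                                # (n_pts, d)
    K = image_tokens @ Wk                                # (n_img, d)
    V = image_tokens @ Wv                                # (n_img, d)
    A = softmax(Q @ K.T / np.sqrt(Q.shape[1]), axis=-1)  # (n_pts, n_img)
    return (A @ V) @ Wo                                  # (n_pts, 3)

rng = np.random.default_rng(0)
d = 16
shape_tokens = rng.normal(size=(32, d))                  # tokenized template shape
image_tokens = rng.normal(size=(64, d))                  # tokenized image features
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wo = rng.normal(size=(d, 3))
disp = displacement_cross_attention(shape_tokens, image_tokens, Wq, Wk, Wv, Wo)
template = rng.normal(size=(32, 3))
deformed = template + disp                               # deformed template reveals geometry
```

Because the output is a displacement of a fixed template, point-to-point mesh correspondence across patients comes for free, which is what segmentation-based reconstructions lack.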
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5258-5277.
Citations: 0
Bayesian Posterior Distribution Estimation of Kinetic Parameters in Dynamic Brain PET Using Generative Deep Learning Models
Pub Date : 2025-07-15 DOI: 10.1109/TMI.2025.3588859
Yanis Djebra;Xiaofeng Liu;Thibault Marin;Amal Tiss;Maeva Dhaynaut;Nicolas Guehl;Keith Johnson;Georges El Fakhri;Chao Ma;Jinsong Ouyang
Positron Emission Tomography (PET) is a valuable imaging method for studying molecular-level processes in the body, such as hyperphosphorylated tau (p-tau) protein aggregates, a hallmark of several neurodegenerative diseases including Alzheimer’s disease. P-tau density and cerebral perfusion can be quantified from dynamic PET images using tracer kinetic modeling techniques. However, noise in PET images leads to uncertainty in the estimated kinetic parameters, which can be quantified by estimating the posterior distribution of kinetic parameters using Bayesian inference (BI). Markov Chain Monte Carlo (MCMC) techniques are commonly used for posterior estimation but have significant computational needs. This work proposes an Improved Denoising Diffusion Probabilistic Model (iDDPM)-based method to estimate the posterior distribution of kinetic parameters in dynamic PET, leveraging the high computational efficiency of deep learning. The performance of the proposed method was evaluated on a [18F]MK6240 study and compared to a Conditional Variational Autoencoder with dual decoder (CVAE-DD)-based method and a Wasserstein GAN with gradient penalty (WGAN-GP)-based method. Posterior distributions inferred from Metropolis-Hastings MCMC were used as reference. Our approach consistently outperformed the CVAE-DD and WGAN-GP methods and offered a significant reduction in computation time over the MCMC method (more than 230 times faster), inferring accurate (<0.67% mean error) and precise (<7.23% standard deviation error) posterior distributions.
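The Metropolis-Hastings reference that the learned methods are compared against can be sketched in its textbook random-walk form for a single parameter; a standard-normal toy posterior stands in for the kinetic model here:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1D parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x)).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: standard normal, log density up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Each posterior query requires thousands of such likelihood evaluations per voxel, which is exactly the cost the iDDPM-based estimator amortizes into a single network pass.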
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5089-5102.
Citations: 0
Region Uncertainty Estimation for Medical Image Segmentation With Noisy Labels
Pub Date : 2025-07-14 DOI: 10.1109/TMI.2025.3589058
Kai Han;Shuhui Wang;Jun Chen;Chengxuan Qian;Chongwen Lyu;Siqi Ma;Chengjian Qiu;Victor S. Sheng;Qingming Huang;Zhe Liu
The success of deep learning in 3D medical image segmentation hinges on training with a large dataset of fully annotated 3D volumes, which are difficult and time-consuming to acquire. Although recent foundation models (e.g., segment anything model, SAM) can utilize sparse annotations to reduce annotation costs, segmentation tasks involving organs and tissues with blurred boundaries remain challenging. To address this issue, we propose a region uncertainty estimation framework for Computed Tomography (CT) image segmentation using noisy labels. Specifically, we propose a sample-stratified training strategy that stratifies samples according to their varying quality labels, prioritizing confident and fine-grained information at each training stage. This sample-to-voxel level processing enables more reliable supervision information to propagate to noisy label data, thus effectively mitigating the impact of noisy annotations. Moreover, we further design a boundary-guided regional uncertainty estimation module that adapts the sample-stratified training to assist in evaluating sample confidence. Experiments conducted across multiple CT datasets demonstrate the superiority of our proposed method over several competitive approaches under various noise conditions. Our proposed reliable label propagation strategy not only significantly reduces the cost of medical image annotation and robust model training but also improves the segmentation performance in scenarios with imperfect annotations, thus paving the way towards the application of medical segmentation foundation models under low-resource and remote scenarios. Code will be available at https://github.com/KHan-UJS/NoisyLabel.
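The sample-stratified idea, partitioning training samples by an estimated label-quality score before staging training, can be sketched as follows; the scores and thresholds are illustrative assumptions, not values from the paper:

```python
def stratify_samples(quality_scores, thresholds=(0.8, 0.5)):
    """Partition sample indices into tiers by estimated label-quality score.

    Training can then start from the 'clean' tier and progressively propagate
    supervision to the noisier tiers; the thresholds here are illustrative."""
    hi, mid = thresholds
    tiers = {"clean": [], "moderate": [], "noisy": []}
    for i, q in enumerate(quality_scores):
        if q >= hi:
            tiers["clean"].append(i)
        elif q >= mid:
            tiers["moderate"].append(i)
        else:
            tiers["noisy"].append(i)
    return tiers

tiers = stratify_samples([0.95, 0.7, 0.3, 0.85, 0.4])
```

A training loop would draw from `tiers["clean"]` in the first stage and fold in the lower tiers only once reliable pseudo-supervision is available for them.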
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5197–5207.
Citations: 0
MDPNet: Multiscale Dynamic Polyp-Focus Network for Enhancing Medical Image Polyp Segmentation
Pub Date : 2025-07-14 DOI: 10.1109/TMI.2025.3588503
Alpha Alimamy Kamara;Shiwen He;Abdul Joseph Fofanah;Rong Xu;Yuehan Chen
Colorectal cancer (CRC) is the most common malignant neoplasm in the digestive system and a primary cause of cancer-related mortality in the United States, exceeded only by lung and prostate cancers. The American Cancer Society estimates that in 2024, there will be approximately 152,810 new cases of colorectal cancer and 53,010 deaths in the United States, highlighting the critical need for early diagnosis and prevention. Precise polyp segmentation is crucial for early detection, as it improves treatability and survival rates. However, existing methods, such as the UNet architecture, struggle to capture long-range dependencies and manage the variability in polyp shapes and sizes, and the low contrast between polyps and the surrounding background. We propose a multiscale dynamic polyp-focus network (MDPNet) to solve these problems. It has three modules: dynamic polyp-focus (DPfocus), non-local multiscale attention pooling (NMAP), and learnable multiscale attention pooling (LMAP). DPfocus captures global pixel-to-polyp dependencies, preserving high-level semantics and emphasizing polyp-specific regions. NMAP stabilizes the model under varying polyp shapes, sizes, and contrasts by dynamically aggregating multiscale features with minimal data loss. LMAP enhances spatial representation by learning multiscale attention across different regions. This enables MDPNet to understand long-range dependencies and combine information from different levels of context, boosting the segmentation accuracy. Extensive experiments on four publicly available datasets demonstrate that MDPNet is effective and outperforms current state-of-the-art segmentation methods by 2–5% in overall accuracy across all datasets. This demonstrates that our method improves polyp segmentation accuracy, aiding early detection and treatment of colorectal cancer.
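The multiscale attention-pooling idea behind modules like NMAP/LMAP — pool the feature map at several scales, then blend the scales with learned attention weights — can be sketched as follows. This is a minimal dependency-free illustration; the scales and the fixed logits standing in for learned parameters are assumptions, not the paper's architecture.

```python
import math


def avg_pool(fm, k):
    """k-by-k average pooling with stride k on a 2-D feature map (list of lists)."""
    h, w = len(fm), len(fm[0])
    return [[sum(fm[i * k + di][j * k + dj] for di in range(k) for dj in range(k)) / (k * k)
             for j in range(w // k)] for i in range(h // k)]


def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]


def multiscale_attention_pool(fm, scales=(1, 2, 4), logits=(0.0, 0.0, 0.0)):
    """Pool the map at each scale, broadcast each pooled map back to full
    resolution by repetition, and blend the scales with softmax attention
    weights (the logits stand in for learnable parameters)."""
    w = softmax(list(logits))
    h, wd = len(fm), len(fm[0])
    out = [[0.0] * wd for _ in range(h)]
    for weight, k in zip(w, scales):
        pooled = avg_pool(fm, k)
        for i in range(h):
            for j in range(wd):
                out[i][j] += weight * pooled[i // k][j // k]
    return out
```

Making the logits trainable (and spatially varying, as a learnable attention map) is what would turn this fixed blend into the "learnable" variant the abstract describes.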
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5208–5220.
Citations: 0
Clinical Stage Prompt Induced Multi-Modal Prognosis
Pub Date : 2025-07-14 DOI: 10.1109/TMI.2025.3588836
Ting Jin;Xingran Xie;Qingli Li;Xinxing Li;Yan Wang
Histology analysis of the tumor micro-environment integrated with genomic assays is widely regarded as the cornerstone for cancer analysis and survival prediction. This paper jointly incorporates genomics and Whole Slide Images (WSIs), and focuses on addressing the primary challenges involved in multi-modality prognosis analysis: 1) the high-order relevance is difficult to be modeled from dimensional imbalanced gigapixel WSIs and tens of thousands of genetic sequences, and 2) the lack of medical expertise and clinical knowledge hampers the effectiveness of prognosis-oriented multi-modal fusion. Due to the nature of the prognosis task, statistical priors and clinical knowledge are essential factors to provide the likelihood of survival over time, which, however, has been under-studied. To this end, we propose a prognosis-oriented image-omics fusion framework, dubbed Clinical Stage Prompt induced Multimodal Prognosis (CiMP). Concretely, we leverage the capabilities of the advanced LLM to generate descriptions derived from structured clinical records and utilize the generated clinical staging prompts to inquire critical prognosis-related information from each modality intentionally. In addition, we propose a Group Multi-Head Self-Attention module to capture structured group-specific features within cohorts of genomic data. Experimental results on five TCGA datasets show the superiority of our proposed method, achieving state-of-the-art performance compared to previous multi-modal prognostic models. Furthermore, the clinical interpretability and discussion also highlight the immense potential for further medical applications. Our code will be released at https://github.com/DeepMed-Lab-ECNU/CiMP/
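The prompt-construction step — turning a structured clinical record into a staging prompt used to query prognosis-relevant information from each modality — can be sketched with a simple template. Note this is a hand-written template for illustration only: the field names are hypothetical, and the paper itself uses an LLM to generate richer descriptions from the records.

```python
def clinical_stage_prompt(record):
    """Turn a structured clinical record (a dict with hypothetical TNM and
    site fields) into a staging prompt string for querying each modality."""
    t, n, m = record["T"], record["N"], record["M"]
    stage = record.get("stage", "unknown")
    return (f"Patient with {record['site']} cancer, TNM {t}{n}{m}, "
            f"clinical stage {stage}. Describe prognosis-relevant features "
            f"in the histology slide and genomic profile.")
```

In the full framework this text would be embedded and used as a query against WSI and genomic tokens, rather than read by a human.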
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5065–5076.
Citations: 0
Self-Supervised Upsampling for Reconstructions With Generalized Enhancement in Photoacoustic Computed Tomography
Pub Date : 2025-07-14 DOI: 10.1109/TMI.2025.3588789
Kexin Deng;Yan Luo;Hongzhi Zuo;Yuwen Chen;Liujie Gu;Mingyuan Liu;Hengrong Lan;Jianwen Luo;Cheng Ma
Photoacoustic computed tomography (PACT) is an emerging hybrid imaging modality with potential applications in biomedicine. A major roadblock to the widespread adoption of PACT is the limited number of detectors, which gives rise to spatial aliasing and manifests as streak artifacts in the reconstructed image. A brute-force solution to the problem is to increase the number of detectors, which, however, is often undesirable due to escalated costs. In this study, we present a novel self-supervised learning approach to overcome this long-standing challenge. We found that small blocks of PACT channel data show similarity at various downsampling rates. Based on this observation, a neural network trained on downsampled data can reliably perform accurate interpolation without requiring densely-sampled ground truth data, which is typically unavailable in real practice. Our method has undergone validation through numerical simulations, controlled phantom experiments, as well as ex vivo and in vivo animal tests, across multiple PACT systems.
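The self-supervision scheme the abstract describes — exploit the similarity of channel data across downsampling rates by further downsampling the available channels and using the dropped channels as targets — can be sketched as follows. The pair construction mirrors the stated idea; the linear interpolator is only a placeholder for the neural network the paper actually trains.

```python
def make_selfsup_pairs(channels):
    """Build self-supervised training pairs from the available (sparse)
    detector channels: even-indexed channels act as input, odd-indexed
    channels as interpolation targets. A model trained this way can then
    be applied to ALL channels to double the effective detector count."""
    inputs = channels[0::2]
    targets = channels[1::2]
    return inputs, targets


def linear_upsample(channels):
    """Placeholder interpolator (the paper trains a network instead):
    insert the sample-wise midpoint between neighboring channels."""
    out = []
    for a, b in zip(channels, channels[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    out.append(channels[-1])
    return out
```

The key point is that no densely-sampled ground truth is ever needed: the sparse acquisition supervises itself at a coarser rate, and the learned mapping generalizes back to the original rate.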
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5117–5127.
Citations: 0