
Latest Publications: Proceedings. IEEE International Symposium on Biomedical Imaging

SYNSTITCH: A SELF-SUPERVISED LEARNING NETWORK FOR ULTRASOUND IMAGE STITCHING USING SYNTHETIC TRAINING PAIRS AND INDIRECT SUPERVISION.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10981027
Xing Yao, Runxuan Yu, Dewei Hu, Hao Yang, Ange Lou, Jiacheng Wang, Daiwei Lu, Gabriel Arenas, Baris Oguz, Alison Pouch, Nadav Schwartz, Brett C Byram, Ipek Oguz

Ultrasound (US) image stitching can expand the field-of-view (FOV) by combining multiple US images from varied probe positions. However, registering US images with only partially overlapping anatomical content is a challenging task. In this work, we introduce SynStitch, a self-supervised framework designed for 2DUS stitching. SynStitch consists of a synthetic stitching pair generation module (SSPGM) and an image stitching module (ISM). SSPGM utilizes a patch-conditioned ControlNet to generate realistic 2DUS stitching pairs with a known affine matrix from a single input image. ISM then utilizes this synthetic paired data to learn 2DUS stitching in a supervised manner. Our framework was evaluated against multiple leading methods on a kidney ultrasound dataset, demonstrating superior 2DUS stitching performance through both qualitative and quantitative analyses. The code will be made public upon acceptance of the paper.
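
A minimal sketch of the supervision signal this setup relies on: sample a random affine matrix, warp an image with it, and train the stitching module to regress the known matrix. The sketch uses plain warping of a single image; in the paper, SSPGM instead synthesizes a realistic counterpart with a patch-conditioned ControlNet, so the parameter ranges and the `ism_network` call here are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def random_affine(batch, max_angle=0.3, max_shift=0.2):
    # Sample a 2x3 affine (rotation + translation) per image; the ranges
    # are illustrative, not the paper's settings.
    theta = (torch.rand(batch) - 0.5) * 2 * max_angle
    tx = (torch.rand(batch) - 0.5) * 2 * max_shift
    ty = (torch.rand(batch) - 0.5) * 2 * max_shift
    cos, sin = torch.cos(theta), torch.sin(theta)
    A = torch.zeros(batch, 2, 3)
    A[:, 0, 0], A[:, 0, 1], A[:, 0, 2] = cos, -sin, tx
    A[:, 1, 0], A[:, 1, 1], A[:, 1, 2] = sin, cos, ty
    return A

def make_pair(img, A):
    # Warp img (B,1,H,W) by the known affine to create a stitching pair.
    grid = F.affine_grid(A, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

# Supervised training of the stitching module on synthetic pairs:
#   A = random_affine(img.shape[0]); moved = make_pair(img, A)
#   loss = F.mse_loss(ism_network(img, moved), A)   # ism_network is hypothetical
```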

Citations: 0
COUPLED SWIN TRANSFORMERS AND MULTI-APERTURES NETWORK (CSTA-NET) IMPROVES MEDICAL IMAGE SEGMENTATION.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/ISBI60581.2025.10981294
Siyavash Shabani, Muhammad Sohaib, Sahar A Mohamed, Bahram Parvin

Vision Transformers have outperformed traditional convolution-based frameworks across various visual tasks, including, but not limited to, the segmentation of 3D medical images. To further advance this area, this study introduces the Coupled Swin Transformers and Multi-Apertures Network (CSTA-Net), which integrates the outputs of each Swin Transformer with an Aperture Network. Each aperture network consists of a convolution and a fusion block for combining global and local feature maps. The proposed model was tested on two independent datasets, showing that fine details are delineated. The proposed architecture was trained on the Synapse multi-organ and ACDC datasets, yielding average Dice scores of 90.19±0.05 and 93.77±0.04, respectively. The code is available here: https://github.com/Siyavashshabani/CSTANet.
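
The aperture network described above (a convolution plus a fusion block merging global and local feature maps) can be pictured with a short PyTorch sketch. The channel layout and the concat-then-1x1 fusion below are our assumptions, not the released implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class ApertureFusion(nn.Module):
    # One aperture block: a local convolutional branch refines the input,
    # then a 1x1 convolution fuses it with the global Swin feature map.
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.GELU(),
        )
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, swin_feat, x):
        # swin_feat: global features from the Swin stage; x: local features.
        return self.fuse(torch.cat([self.local(x), swin_feat], dim=1))
```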

Citations: 0
MAMBA-BASED RESIDUAL GENERATIVE ADVERSARIAL NETWORK FOR FUNCTIONAL CONNECTIVITY HARMONIZATION DURING INFANCY.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10981047
Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Weiyan Yin, Zhengwang Wu, Li Wang, Weili Lin, Gang Li

How to harmonize site effects is a fundamental challenge in modern multi-site neuroimaging studies. Although many statistical models and deep learning methods have been proposed to mitigate site effects while preserving biological characteristics, harmonization schemes for multi-site resting-state functional magnetic resonance imaging (rs-fMRI), particularly for functional connectivity (FC), remain undeveloped. Moreover, statistical models, though effective for region-level data, are inherently unsuitable for capturing complex, nonlinear mappings required for FC harmonization. To address these issues, we develop a novel, flexible deep learning method, Mamba-based Residual Generative adversarial network (MR-GAN), to harmonize multi-site functional connectivities. Our method leverages the Mamba Block, which has been proven effective in traditional visual tasks, to define FC-specified sequential patterns and integrate them with a multi-task residual GAN to harmonize multi-site FC data. Experiments on 939 infant rs-fMRI scans from four sites demonstrate the superior performance of the proposed method in harmonization compared to other approaches.
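
A toy sketch of the residual-generator idea: the generator predicts a site correction that is added back to the input functional connectivity, so the network only has to model the (presumably small) site effect. The plain MLP below is a stand-in for the paper's Mamba block, whose selective state-space machinery is omitted here; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualHarmonizer(nn.Module):
    # Generator of a residual GAN for FC harmonization: output = input + residual.
    # The encoder is an MLP placeholder for the paper's Mamba block.
    def __init__(self, n_edges, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_edges, hidden),
            nn.GELU(),
            nn.Linear(hidden, n_edges),
        )

    def forward(self, fc):
        # fc: (B, n_edges) vectorized upper triangle of an FC matrix.
        return fc + self.encoder(fc)

# Adversarial training would pair this with a site discriminator so that
# harmonized FC vectors become indistinguishable across sites.
```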

Citations: 0
ACCELERATING QUANTITATIVE MRI USING SUBSPACE MULTISCALE ENERGY MODEL (SS-MUSE).
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980741
Yan Chen, Jyothi Rikhab Chand, Steven R Kecskemeti, James H Holmes, Mathews Jacob

Multi-contrast MRI methods acquire multiple images with different contrast weightings, which are used to differentiate tissue types or for quantitative mapping. However, the scan time needed to acquire multiple contrasts is prohibitively long for 3D acquisition schemes, which can offer isotropic image resolution. While deep learning-based methods have been extensively used to accelerate 2D and 2D + time problems, the high memory demand, computation time, and need for large training data sets make them challenging for large-scale volumes. To address these challenges, we generalize the plug-and-play multi-scale energy-based model (MuSE) to a regularized subspace recovery setting, where we jointly regularize the 3D multi-contrast spatial factors in a subspace formulation. The explicit energy-based formulation allows us to use variable splitting optimization methods for computationally efficient recovery.
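
In our notation (which may differ from the paper's), a subspace-regularized recovery with variable splitting can be written as follows, where the image series is factored as $X = UV$ with a fixed temporal basis $V$, $\mathcal{A}$ is the forward operator (coil sensitivities plus Fourier undersampling), $b$ the measured data, and $\mathcal{R}$ the learned MuSE energy:

```latex
\begin{align}
  \hat{U} &= \arg\min_{U}\; \tfrac{1}{2}\,\lVert \mathcal{A}(UV) - b \rVert_2^2
            + \lambda\, \mathcal{R}(U), \\
  % variable splitting introduces an auxiliary Z \approx U so the data term
  % and the learned energy can be minimized alternately:
  \min_{U,Z} &\;\; \tfrac{1}{2}\,\lVert \mathcal{A}(UV) - b \rVert_2^2
            + \lambda\, \mathcal{R}(Z)
            + \tfrac{\beta}{2}\,\lVert U - Z \rVert_2^2 .
\end{align}
```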

Citations: 0
HIERARCHICAL LOG BAYESIAN NEURAL NETWORK FOR ENHANCED AORTA SEGMENTATION.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980947
Delin An, Pan Du, Pengfei Gu, Jian-Xun Wang, Chaoli Wang

Accurate segmentation of the aorta and its associated arch branches is crucial for diagnosing aortic diseases. While deep learning techniques have significantly improved aorta segmentation, they remain challenging due to the intricate multiscale structure and the complexity of the surrounding tissues. This paper presents a novel approach for enhancing aorta segmentation using a Bayesian neural network-based hierarchical Laplacian of Gaussian (LoG) model. Our model consists of a 3D U-Net stream and a hierarchical LoG stream: the former provides an initial aorta segmentation, and the latter enhances blood vessel detection across varying scales by learning suitable LoG kernels, enabling self-adaptive handling of different parts of the aorta vessels with significant scale differences. We employ a Bayesian method to parameterize the LoG stream and provide confidence intervals for the segmentation results, ensuring robustness and reliability of the prediction for vascular medical image analysts. Experimental results show that our model can accurately segment main and supra-aortic vessels, yielding at least a 3% gain in the Dice coefficient over state-of-the-art methods across multiple volumes drawn from two aorta datasets, and can provide reliable confidence intervals for different parts of the aorta. The code is available at https://github.com/adlsn/LoGBNet.
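
For intuition, the classical multiscale primitive the LoG stream adapts looks like this: a bank of Laplacian-of-Gaussian kernels whose scales match vessel calibers (2D and unnormalized here for brevity; the paper works in 3D and learns suitable kernels rather than fixing them).

```python
import numpy as np

def log_kernel(sigma, size=None):
    # Laplacian-of-Gaussian kernel at scale sigma (2D for brevity).
    size = size or int(2 * np.ceil(3 * sigma) + 1)
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = (r2 / (2 * sigma ** 2) - 1) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # zero DC response, so flat regions give no output

# Small sigmas respond to thin arch branches, large sigmas to the aortic trunk.
bank = [log_kernel(s) for s in (1.0, 2.0, 4.0)]
```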

Citations: 0
CT CONTRAST PHASE IDENTIFICATION BY PREDICTING THE TEMPORAL ANGLE USING CIRCULAR REGRESSION.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980877
Dingjie Su, Katherine D Van Schaik, Lucas W Remedios, Thomas Li, Fabien Maldonado, Kim L Sandler, Benoit M Dawant, Bennett A Landman

Contrast enhancement is widely used in computed tomography (CT) scans, where radiocontrast agents circulate through the bloodstream and accumulate in the vasculature, creating visual contrast between blood vessels and surrounding tissues. This work introduces a technique to predict the timing of contrast in a CT scan, a key factor influencing the contrast effect, using circular regression models. Specifically, we represent the contrast timing as unit vectors on a circle and employ 2D convolutional neural networks to predict it based on predefined anchor time points. Unlike previous methods that treat contrast timing as discrete phases, our approach is the first method that views it as a continuous variable, offering a more fine-grained understanding of contrast differences, particularly in relation to patient-specific vascular effects. We train the model on 877 CT scans and test it on 112 scans from different subjects, achieving a classification accuracy of 93.8%, which is similar to state-of-the-art results reported in the literature. We compare our method to other 2D and 3D classification-based approaches, demonstrating that our regression model has overall better performance than the classification models. Additionally, we explore the relationship between contrast timing and the anatomical positions of CT slices, aiming to leverage positional information to improve the prediction accuracy, a promising direction that has not yet been studied.
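
The circular-regression idea can be made concrete in a few lines: encode the timing as a point on the unit circle, train with a cosine loss, and decode with atan2. The `period` normalization below is our assumption standing in for the paper's predefined anchor time points.

```python
import torch
import torch.nn.functional as F

def to_unit_vector(t, period):
    # Map a contrast time t (e.g., seconds post-injection) onto the unit circle.
    ang = 2 * torch.pi * t / period
    return torch.stack([torch.cos(ang), torch.sin(ang)], dim=-1)

def circular_loss(pred, target_vec):
    # 1 - cos(angular error) between the normalized prediction and the target.
    pred = F.normalize(pred, dim=-1)
    return (1.0 - (pred * target_vec).sum(dim=-1)).mean()

def to_time(vec, period):
    # Decode a predicted 2-vector back into a timing estimate.
    ang = torch.atan2(vec[..., 1], vec[..., 0]) % (2 * torch.pi)
    return ang * period / (2 * torch.pi)
```

Representing time as a vector rather than an angle avoids the wrap-around discontinuity at 0/2π, which is what makes ordinary regression on the raw timing value ill-behaved.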

Citations: 0
TV-BASED DEEP 3D SELF SUPER-RESOLUTION FOR FMRI.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980709
Fernando Pérez-Bueno, Hongwei B Li, Matthew S Rosen, Shahin Nasr, César Caballero-Gaudes, Juan E Iglesias

While functional Magnetic Resonance Imaging (fMRI) offers valuable insights into cognitive processes, its inherent spatial limitations pose challenges for detailed analysis of the fine-grained functional architecture of the brain. More specifically, MRI scanner and sequence specifications impose a trade-off between temporal resolution, spatial resolution, signal-to-noise ratio, and scan time. Deep Learning (DL) Super-Resolution (SR) methods have emerged as a promising solution to enhance fMRI resolution, generating high-resolution (HR) images from low-resolution (LR) images typically acquired with lower scanning times. However, most existing SR approaches depend on supervised DL techniques, which require ground-truth (GT) HR training data; such data is often difficult to acquire and simultaneously bounds how far SR can go. In this paper, we introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization. Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques and preserving the functional maps.
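
A minimal sketch of such a self-supervised objective: the super-resolved volume must reproduce the acquired LR data when passed back through a downsampling model, while TV keeps it piecewise smooth. Average pooling as the forward model and the weights below are illustrative assumptions, not the paper's analytical operator.

```python
import torch
import torch.nn.functional as F

def tv3d(x):
    # Anisotropic total variation of a (B, C, D, H, W) volume.
    return (x.diff(dim=2).abs().mean()
            + x.diff(dim=3).abs().mean()
            + x.diff(dim=4).abs().mean())

def self_sr_loss(net, lr_vol, scale=2, lam=1e-3):
    # No HR ground truth: consistency with the LR input + TV regularization.
    hr = net(lr_vol)                               # net upsamples by `scale`
    lr_hat = F.avg_pool3d(hr, kernel_size=scale)   # stand-in downsampling model
    return F.mse_loss(lr_hat, lr_vol) + lam * tv3d(hr)
```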

Citations: 0
LEVERAGING CONTRAST AGENT KINETICS FOR ROBUST REFLECTANCE MODE FLUORESCENCE TOMOGRAPHY.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980828
Mariella Kast, Mykhaylo Zayats, Shayan Shafiee, Sergiy Zhuk, Jan S Hesthaven, Amit Joshi

Fluorescence Image Guided Surgery utilizes continuous wave epi-fluorescence measurements on the tissue surface to locate targets such as tumors or lymph nodes, but precise 3D localization of deep targets remains intractable due to the ill-posedness of the associated inverse problem. We propose a Fluorescence Diffuse Optical Tomography scheme which leverages the different contrast agent kinetics in malignant vs normal tissue and reconstructs the 3D tumor location from a time series of epi-fluorescence measurements. We conduct sequential synthetic experiments, which mimic the differential uptake and release profile of fluorescent dye ICG in tumors vs normal tissue, and demonstrate for the first time that the proposed method can robustly recover targets up to 1 cm deep and in the presence of realistic tumor-to-background ratios.
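
A toy two-rate model illustrates the kinetic contrast being exploited: tumor tissue retains the dye (slower washout) while normal tissue clears it, so the measured time series separates the compartments even when a single static frame would not. All rate constants here are illustrative, not fitted values.

```python
import numpy as np

def dye_curve(t, k_in, k_out, amplitude=1.0):
    # Two-rate uptake/washout: rises with k_in, clears with k_out (k_in > k_out).
    return amplitude * (np.exp(-k_out * t) - np.exp(-k_in * t))

t = np.linspace(0.0, 600.0, 200)                  # ten minutes of frames (s)
tumor = dye_curve(t, k_in=0.05, k_out=0.002)      # retention: slow washout
normal = dye_curve(t, k_in=0.05, k_out=0.01)      # faster clearance
# The growing divergence of these curves over time is the extra constraint a
# time-series reconstruction has over a single epi-fluorescence snapshot.
```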

Citations: 0
TAU PET HARMONIZATION VIA SURFACE-BASED DIFFUSION MODEL.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10981166
Jiaxin Yue, Jianwei Zhang, Lujia Zhong, Yonggang Shi

The heterogeneity inherent in tau positron emission tomography (PET) imaging data across different tracers challenges the integration of multi-site tau PET data, necessitating trustworthy harmonization techniques to better utilize the emerging large-scale datasets. Unlike other imaging modalities, harmonization among multi-site tau PET data involves more than intensity mapping: it must capture intricate pattern alterations attributed to tracer binding properties, which makes existing statistical methods inadequate. Meanwhile, effective data preprocessing is required to eliminate the artifacts caused by off-target binding and the partial volume effect to enable meaningful comparison and harmonization. In this paper, we propose a systematic tau PET harmonization framework that combines surface-based data preprocessing with a diffusion model to generate a vertex-wise mapping between multi-site tau standardized uptake value ratios (SUVR) on the cortical surface. In experiments using large-scale Alzheimer's Disease Neuroimaging Initiative (ADNI) and Health and Aging Brain Study-Health Disparities (HABS-HD) data acquired with different tracers, we demonstrate that our method successfully achieves harmonization, generating SUVR maps with consistent pattern distributions while preserving individual variability.
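
One way to picture the generative core (our generic assumption of a DDPM-style objective, not the paper's exact training recipe): noise the target-tracer vertex-wise SUVR map and train a network to predict that noise, given the source-tracer map as conditioning.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(eps_net, suvr_src, suvr_tgt, T=1000):
    # suvr_src / suvr_tgt: (B, C, V) vertex-wise SUVR maps from two tracers.
    # eps_net is a hypothetical noise predictor conditioned by concatenation.
    b = suvr_tgt.shape[0]
    t = torch.randint(0, T, (b,), device=suvr_tgt.device)
    beta = torch.linspace(1e-4, 0.02, T, device=suvr_tgt.device)
    abar = torch.cumprod(1.0 - beta, dim=0)[t].view(b, 1, 1)
    noise = torch.randn_like(suvr_tgt)
    x_t = abar.sqrt() * suvr_tgt + (1.0 - abar).sqrt() * noise
    return F.mse_loss(eps_net(torch.cat([x_t, suvr_src], dim=1), t), noise)
```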

Citations: 0
DUAL PROMPTING FOR DIVERSE COUNT-LEVEL PET DENOISING.
Pub Date: 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980695
Xiaofeng Liu, Yongsong Huang, Thibault Marin, Samira Vafay Eslahi, Amal Tiss, Yanis Chemli, Keith A Johnson, Georges El Fakhri, Jinsong Ouyang

Positron emission tomography (PET) volumes to be denoised inherently have diverse count levels, which makes it challenging for a unified model to tackle the varied cases. In this work, we leverage prompt learning to achieve generalizable PET denoising across different count levels. Specifically, we propose dual prompts to guide PET denoising in a divide-and-conquer manner: an explicit count-level prompt provides the specific prior information, and an implicit general denoising prompt encodes the essential PET denoising knowledge. Then, a novel prompt fusion module is developed to unify the heterogeneous prompts, followed by a prompt-feature interaction module to inject prompts into the features. The prompts dynamically guide the noise-conditioned denoising process. Therefore, we can efficiently train a unified denoising model for various count levels and deploy it to different cases with personalized prompts. We evaluated on 1940 low-count PET 3D volumes with uniformly randomly selected 13-22% fractions of events from 97 18F-MK6240 tau PET studies. The results show that our dual prompting largely improves performance given an informed count level and outperforms the count-conditional model.
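
A sketch of how dual prompts might be fused and injected: FiLM-style per-channel modulation is our assumption for the prompt-feature interaction module, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DualPromptFusion(nn.Module):
    # Fuses an explicit count-level prompt with an implicit learned general
    # prompt, then injects the result into backbone features via per-channel
    # scale/shift (FiLM-style; an assumption, not the paper's exact module).
    def __init__(self, dim, feat_ch):
        super().__init__()
        self.count_embed = nn.Linear(1, dim)            # explicit count prompt
        self.general = nn.Parameter(torch.zeros(dim))   # implicit shared prompt
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU())
        self.to_film = nn.Linear(dim, 2 * feat_ch)

    def forward(self, feat, count_frac):
        # feat: (B, C, D, H, W); count_frac: (B,) known dose fraction per case.
        p_cnt = self.count_embed(count_frac[:, None])
        p_gen = self.general.expand_as(p_cnt)
        p = self.fuse(torch.cat([p_cnt, p_gen], dim=-1))
        scale, shift = self.to_film(p).chunk(2, dim=-1)
        s = scale[:, :, None, None, None]
        b = shift[:, :, None, None, None]
        return feat * (1.0 + s) + b
```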

Citations: 0