SYNSTITCH: A SELF-SUPERVISED LEARNING NETWORK FOR ULTRASOUND IMAGE STITCHING USING SYNTHETIC TRAINING PAIRS AND INDIRECT SUPERVISION
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981027
Xing Yao, Runxuan Yu, Dewei Hu, Hao Yang, Ange Lou, Jiacheng Wang, Daiwei Lu, Gabriel Arenas, Baris Oguz, Alison Pouch, Nadav Schwartz, Brett C Byram, Ipek Oguz
Ultrasound (US) image stitching can expand the field-of-view (FOV) by combining multiple US images from varied probe positions. However, registering US images with only partially overlapping anatomical contents is a challenging task. In this work, we introduce SynStitch, a self-supervised framework designed for 2DUS stitching. SynStitch consists of a synthetic stitching pair generation module (SSPGM) and an image stitching module (ISM). SSPGM utilizes a patch-conditioned ControlNet to generate realistic 2DUS stitching pairs with a known affine matrix from a single input image. ISM then utilizes this synthetic paired data to learn 2DUS stitching in a supervised manner. Our framework was evaluated against multiple leading methods on a kidney ultrasound dataset, demonstrating superior 2DUS stitching performance through both qualitative and quantitative analyses. The code will be made public upon acceptance of the paper.
{"title":"SYNSTITCH: A SELF-SUPERVISED LEARNING NETWORK FOR ULTRASOUND IMAGE STITCHING USING SYNTHETIC TRAINING PAIRS AND INDIRECT SUPERVISION.","authors":"Xing Yao, Runxuan Yu, Dewei Hu, Hao Yang, Ange Lou, Jiacheng Wang, Daiwei Lu, Gabriel Arenas, Baris Oguz, Alison Pouch, Nadav Schwartz, Brett C Byram, Ipek Oguz","doi":"10.1109/isbi60581.2025.10981027","DOIUrl":"10.1109/isbi60581.2025.10981027","url":null,"abstract":"<p><p>Ultrasound (US) image stitching can expand the field-of-view (FOV) by combining multiple US images from varied probe positions. However, registering US images with only partially overlapping anatomical contents is a challenging task. In this work, we introduce SynStitch, a self-supervised framework designed for 2DUS stitching. SynStitch consists of a synthetic stitching pair generation module (SSPGM) and an image stitching module (ISM). SSPGM utilizes a patch-conditioned ControlNet to generate realistic 2DUS stitching pairs with known affine matrix from a single input image. ISM then utilizes this synthetic paired data to learn 2DUS stitching in a supervised manner. Our framework was evaluated against multiple leading methods on a kidney ultrasound dataset, demonstrating superior 2DUS stitching performance through both qualitative and quantitative analyses. The code will be made public upon acceptance of the paper.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12175646/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
COUPLED SWIN TRANSFORMERS AND MULTI-APERTURES NETWORK (CSTA-NET) IMPROVES MEDICAL IMAGE SEGMENTATION
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/ISBI60581.2025.10981294
Siyavash Shabani, Muhammad Sohaib, Sahar A Mohamed, Bahram Parvin
Vision Transformers have outperformed traditional convolution-based frameworks across various visual tasks, including, but not limited to, the segmentation of 3D medical images. To further advance this area, this study introduces the Coupled Swin Transformers and Multi-Apertures Network (CSTA-Net), which integrates the outputs of each Swin Transformer with an Aperture Network. Each aperture network consists of a convolution and a fusion block for combining global and local feature maps. The proposed model was tested on two independent datasets to show that fine details are delineated. The architecture was trained on the Synapse multi-organ and ACDC datasets, achieving average Dice scores of 90.19±0.05 and 93.77±0.04, respectively. The code is available here: https://github.com/Siyavashshabani/CSTANet.
{"title":"COUPLED SWIN TRANSFORMERS AND MULTI-APERTURES NETWORK(CSTA-NET) IMPROVES MEDICAL IMAGE SEGMENTATION.","authors":"Siyavash Shabani, Muhammad Sohaib, Sahar A Mohamed, Bahram Parvin","doi":"10.1109/ISBI60581.2025.10981294","DOIUrl":"10.1109/ISBI60581.2025.10981294","url":null,"abstract":"<p><p>Vision Transformers have outperformed traditional convolution-based frameworks across various visual tasks, including, but not limited to, the segmentation of 3D medical images. To further advance this area, this study introduces the Coupled Swin Transformers and Multi-Apertures Networks (CSTA-Net), which integrates the outputs of each Swin Transformer with an Aperture Network. Each aperture network consists of a convolution and a fusion block for combining global and local feature maps. The proposed model has been tested on two independent datasets to show that fine details are delineated. The proposed architecture was trained on the Synapse multi-organ and ACDC datasets to conclude an average Dice score of 90.19±0.05 and 93.77±0.04, respectively. The code is available here: https://github.com/Siyavashshabani/CSTANet.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12068877/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144048243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MAMBA-BASED RESIDUAL GENERATIVE ADVERSARIAL NETWORK FOR FUNCTIONAL CONNECTIVITY HARMONIZATION DURING INFANCY
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981047
Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Weiyan Yin, Zhengwang Wu, Li Wang, Weili Lin, Gang Li
Harmonizing site effects is a fundamental challenge in modern multi-site neuroimaging studies. Although many statistical models and deep learning methods have been proposed to mitigate site effects while preserving biological characteristics, harmonization schemes for multi-site resting-state functional magnetic resonance imaging (rs-fMRI), particularly for functional connectivity (FC), remain undeveloped. Moreover, statistical models, though effective for region-level data, are inherently unsuitable for capturing the complex, nonlinear mappings required for FC harmonization. To address these issues, we develop a novel, flexible deep learning method, the Mamba-based Residual Generative Adversarial Network (MR-GAN), to harmonize multi-site functional connectivity. Our method leverages the Mamba block, which has proven effective in traditional visual tasks, to define FC-specific sequential patterns and integrates them with a multi-task residual GAN to harmonize multi-site FC data. Experiments on 939 infant rs-fMRI scans from four sites demonstrate the superior performance of the proposed method in harmonization compared to other approaches.
{"title":"MAMBA-BASED RESIDUAL GENERATIVE ADVERSARIAL NETWORK FOR FUNCTIONAL CONNECTIVITY HARMONIZATION DURING INFANCY.","authors":"Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Weiyan Yin, Zhengwang Wu, Li Wang, Weili Lin, Gang Li","doi":"10.1109/isbi60581.2025.10981047","DOIUrl":"10.1109/isbi60581.2025.10981047","url":null,"abstract":"<p><p>How to harmonize site effects is a fundamental challenge in modern multi-site neuroimaging studies. Although many statistical models and deep learning methods have been proposed to mitigate site effects while preserving biological characteristics, harmonization schemes for multi-site resting-state functional magnetic resonance imaging (rs-fMRI), particularly for functional connectivity (FC), remain undeveloped. Moreover, statistical models, though effective for region-level data, are inherently unsuitable for capturing complex, nonlinear mappings required for FC harmonization. To address these issues, we develop a novel, flexible deep learning method, Mamba-based Residual Generative adversarial network (MR-GAN), to harmonize multi-site functional connectivities. Our method leverages the Mamba Block, which has been proven effective in traditional visual tasks, to define FC-specified sequential patterns and integrate them with a multi-task residual GAN to harmonize multi-site FC data. Experiments on 939 infant rs-fMRI scans from four sites demonstrate the superior performance of the proposed method in harmonization compared to other approaches.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12490067/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145234328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ACCELERATING QUANTITATIVE MRI USING SUBSPACE MULTISCALE ENERGY MODEL (SS-MUSE)
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980741
Yan Chen, Jyothi Rikhab Chand, Steven R Kecskemeti, James H Holmes, Mathews Jacob
Multi-contrast MRI methods acquire multiple images with different contrast weightings, which are used for differentiation of tissue types or for quantitative mapping. However, the scan time needed to acquire multiple contrasts is prohibitively long for 3D acquisition schemes, which can offer isotropic image resolution. While deep learning-based methods have been extensively used to accelerate 2D and 2D + time problems, their high memory demand, computation time, and need for large training data sets make them challenging for large-scale volumes. To address these challenges, we generalize the plug-and-play multi-scale energy-based model (MuSE) to a regularized subspace recovery setting, where we jointly regularize the 3D multi-contrast spatial factors in a subspace formulation. The explicit energy-based formulation allows us to use variable splitting optimization methods for computationally efficient recovery.
{"title":"ACCELERATING QUANTITATIVE MRI USING SUBSPACE MULTISCALE ENERGY MODEL (SS-MUSE).","authors":"Yan Chen, Jyothi Rikhab Chand, Steven R Kecskemeti, James H Holmes, Mathews Jacob","doi":"10.1109/isbi60581.2025.10980741","DOIUrl":"10.1109/isbi60581.2025.10980741","url":null,"abstract":"<p><p>Multi-contrast MRI methods acquire multiple images with different contrast weightings, which are used for the differentiation of the tissue types or quantitative mapping. However, the scan time needed to acquire multiple contrasts is prohibitively long for 3D acquisition schemes, which can offer isotropic image resolution. While deep learning-based methods have been extensively used to accelerate 2D and 2D + time problems, the high memory demand, computation time, and need for large training data sets make them challenging for large-scale volumes. To address these challenges, we generalize the plug-and-play multi-scale energy-based model (MuSE) to a regularized subspace recovery setting, where we jointly regularize the 3D multi-contrast spatial factors in a subspace formulation. The explicit energy-based formulation allows us to use variable splitting optimization methods for computationally efficient recovery.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381881/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HIERARCHICAL LOG BAYESIAN NEURAL NETWORK FOR ENHANCED AORTA SEGMENTATION
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980947
Delin An, Pan Du, Pengfei Gu, Jian-Xun Wang, Chaoli Wang
Accurate segmentation of the aorta and its associated arch branches is crucial for diagnosing aortic diseases. While deep learning techniques have significantly improved aorta segmentation, the task remains challenging due to the aorta's intricate multiscale structure and the complexity of the surrounding tissues. This paper presents a novel approach for enhancing aorta segmentation using a Bayesian neural network-based hierarchical Laplacian of Gaussian (LoG) model. Our model consists of a 3D U-Net stream and a hierarchical LoG stream: the former provides an initial aorta segmentation, and the latter enhances blood vessel detection across varying scales by learning suitable LoG kernels, enabling self-adaptive handling of different parts of the aorta with significant scale differences. We employ a Bayesian method to parameterize the LoG stream and provide confidence intervals for the segmentation results, ensuring robustness and reliability of the predictions for vascular medical image analysts. Experimental results show that our model accurately segments main and supra-aortic vessels, yielding at least a 3% gain in the Dice coefficient over state-of-the-art methods across multiple volumes drawn from two aorta datasets, and provides reliable confidence intervals for different parts of the aorta. The code is available at https://github.com/adlsn/LoGBNet.
{"title":"HIERARCHICAL LOG BAYESIAN NEURAL NETWORK FOR ENHANCED AORTA SEGMENTATION.","authors":"Delin An, Pan Du, Pengfei Gu, Jian-Xun Wang, Chaoli Wang","doi":"10.1109/isbi60581.2025.10980947","DOIUrl":"10.1109/isbi60581.2025.10980947","url":null,"abstract":"<p><p>Accurate segmentation of the aorta and its associated arch branches is crucial for diagnosing aortic diseases. While deep learning techniques have significantly improved aorta segmentation, they remain challenging due to the intricate multiscale structure and the complexity of the surrounding tissues. This paper presents a novel approach for enhancing aorta segmentation using a Bayesian neural network-based hierarchical Laplacian of Gaussian (LoG) model. Our model consists of a 3D U-Net stream and a hierarchical LoG stream: the former provides an initial aorta segmentation, and the latter enhances blood vessel detection across varying scales by learning suitable LoG kernels, enabling self-adaptive handling of different parts of the aorta vessels with significant scale differences. We employ a Bayesian method to parameterize the LoG stream and provide confidence intervals for the segmentation results, ensuring robustness and reliability of the prediction for vascular medical image analysts. Experimental results show that our model can accurately segment main and supra-aortic vessels, yielding at least a 3% gain in the Dice coefficient over state-of-the-art methods across multiple volumes drawn from two aorta datasets, and can provide reliable confidence intervals for different parts of the aorta. The code is available at https://github.com/adlsn/LoGBNet.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12459665/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CT CONTRAST PHASE IDENTIFICATION BY PREDICTING THE TEMPORAL ANGLE USING CIRCULAR REGRESSION
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980877
Dingjie Su, Katherine D Van Schaik, Lucas W Remedios, Thomas Li, Fabien Maldonado, Kim L Sandler, Benoit M Dawant, Bennett A Landman
Contrast enhancement is widely used in computed tomography (CT) scans, where radiocontrast agents circulate through the bloodstream and accumulate in the vasculature, creating visual contrast between blood vessels and surrounding tissues. This work introduces a technique to predict the timing of contrast in a CT scan, a key factor influencing the contrast effect, using circular regression models. Specifically, we represent the contrast timing as unit vectors on a circle and employ 2D convolutional neural networks to predict it based on predefined anchor time points. Unlike previous methods that treat contrast timing as discrete phases, our approach is the first to view it as a continuous variable, offering a more fine-grained understanding of contrast differences, particularly in relation to patient-specific vascular effects. We train the model on 877 CT scans and test it on 112 scans from different subjects, achieving a classification accuracy of 93.8%, which is comparable to state-of-the-art results reported in the literature. We compare our method to other 2D and 3D classification-based approaches, demonstrating that our regression model has overall better performance than the classification models. Additionally, we explore the relationship between contrast timing and the anatomical positions of CT slices, aiming to leverage positional information to improve prediction accuracy, a promising direction that has not been studied before.
{"title":"CT CONTRAST PHASE IDENTIFICATION BY PREDICTING THE TEMPORAL ANGLE USING CIRCULAR REGRESSION.","authors":"Dingjie Su, Katherine D Van Schaik, Lucas W Remedios, Thomas Li, Fabien Maldonado, Kim L Sandler, Benoit M Dawant, Bennett A Landman","doi":"10.1109/isbi60581.2025.10980877","DOIUrl":"10.1109/isbi60581.2025.10980877","url":null,"abstract":"<p><p>Contrast enhancement is widely used in computed tomography (CT) scans, where radiocontrast agents circulate through the bloodstream and accumulate in the vasculature, creating visual contrast between blood vessels and surrounding tissues. This work introduces a technique to predict the timing of contrast in a CT scan, a key factor influencing the contrast effect, using circular regression models. Specifically, we represent the contrast timing as unit vectors on a circle and employ 2D convolutional neural networks to predict it based on predefined anchor time points. Unlike previous methods that treat contrast timing as discrete phases, our approach is the first method that views it as a continuous variable, offering a more fine-grained understanding of contrast differences, particularly in relation to patient-specific vascular effects. We train the model on 877 CT scans and test it on 112 scans from different subjects, achieving a classification accuracy of 93.8%, which is similar to state-of-the-art results reported in the literature. We compare our method to other 2D and 3D classification-based approaches, demonstrating that our regression model have overall better performance than the classification models. Additionally, we explore the relationship between contrast timing and the anatomical positions of CT slices, aiming to leverage positional information to improve the prediction accuracy, which is a promising direction that has not been studied.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352434/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144877196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TV-BASED DEEP 3D SELF SUPER-RESOLUTION FOR FMRI
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980709
Fernando Pérez-Bueno, Hongwei B Li, Matthew S Rosen, Shahin Nasr, César Caballero-Gaudes, Juan E Iglesias
While functional Magnetic Resonance Imaging (fMRI) offers valuable insights into cognitive processes, its inherent spatial limitations pose challenges for detailed analysis of the fine-grained functional architecture of the brain. More specifically, MRI scanner and sequence specifications impose a trade-off between temporal resolution, spatial resolution, signal-to-noise ratio, and scan time. Deep Learning (DL) Super-Resolution (SR) methods have emerged as a promising solution to enhance fMRI resolution, generating high-resolution (HR) images from low-resolution (LR) images typically acquired with shorter scan times. However, most existing SR approaches depend on supervised DL techniques, which require ground-truth (GT) HR training data that is often difficult to acquire and simultaneously bounds how far SR can go. In this paper, we introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization. Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques while preserving the functional maps.
{"title":"TV-BASED DEEP 3D SELF SUPER-RESOLUTION FOR FMRI.","authors":"Fernando Pérez-Bueno, Hongwei B Li, Matthew S Rosen, Shahin Nasr, César Caballero-Gaudes, Juan E Iglesias","doi":"10.1109/isbi60581.2025.10980709","DOIUrl":"https://doi.org/10.1109/isbi60581.2025.10980709","url":null,"abstract":"<p><p>While functional Magnetic Resonance Imaging (fMRI) offers valuable insights into cognitive processes, its inherent spatial limitations pose challenges for detailed analysis of the fine-grained functional architecture of the brain. More specifically, MRI scanner and sequence specifications impose a trade-off between temporal resolution, spatial resolution, signal-to-noise ratio, and scan time. Deep Learning (DL) Super-Resolution (SR) methods have emerged as a promising solution to enhance fMRI resolution, generating high-resolution (HR) images from low-resolution (LR) images typically acquired with lower scanning times. However, most existing SR approaches depend on supervised DL techniques, which require training ground truth (GT) HR data, which is often difficult to acquire and simultaneously sets a bound for how far SR can go. In this paper, we introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization. Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques and preserving the functional maps.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12370177/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LEVERAGING CONTRAST AGENT KINETICS FOR ROBUST REFLECTANCE MODE FLUORESCENCE TOMOGRAPHY
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980828
Mariella Kast, Mykhaylo Zayats, Shayan Shafiee, Sergiy Zhuk, Jan S Hesthaven, Amit Joshi
Fluorescence Image Guided Surgery utilizes continuous-wave epi-fluorescence measurements on the tissue surface to locate targets such as tumors or lymph nodes, but precise 3D localization of deep targets remains intractable due to the ill-posedness of the associated inverse problem. We propose a Fluorescence Diffuse Optical Tomography scheme that leverages the different contrast agent kinetics in malignant vs. normal tissue and reconstructs the 3D tumor location from a time series of epi-fluorescence measurements. We conduct sequential synthetic experiments, which mimic the differential uptake and release profiles of the fluorescent dye ICG in tumors vs. normal tissue, and demonstrate for the first time that the proposed method can robustly recover targets up to 1 cm deep and in the presence of realistic tumor-to-background ratios.
{"title":"LEVERAGING CONTRAST AGENT KINETICS FOR ROBUST REFLECTANCE MODE FLUORESCENCE TOMOGRAPHY.","authors":"Mariella Kast, Mykhaylo Zayats, Shayan Shafiee, Sergiy Zhuk, Jan S Hesthaven, Amit Joshi","doi":"10.1109/isbi60581.2025.10980828","DOIUrl":"10.1109/isbi60581.2025.10980828","url":null,"abstract":"<p><p>Fluorescence Image Guided Surgery utilizes continuous wave epi-fluorescence measurements on the tissue surface to locate targets such as tumors or lymph nodes, but precise 3D localization of deep targets remains intractable due to the illposedness of the associated inverse problem. We propose a Fluorescence Diffuse Optical Tomography scheme which leverages the different contrast agent kinetics in malignant vs normal tissue and reconstructs the 3D tumor location from a time series of epi-fluorescence measurements. We conduct sequential synthetic experiments, which mimic the differential uptake and release profile of fluorescent dye ICG in tumors vs normal tissue and demonstrate for the first time that the proposed method can robustly recover targets up to 1cm deep and in the presence of realistic tumor-to-background ratios.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12165278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144303907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TAU PET HARMONIZATION VIA SURFACE-BASED DIFFUSION MODEL
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10981166
Jiaxin Yue, Jianwei Zhang, Lujia Zhong, Yonggang Shi
The heterogeneity inherent in tau positron emission tomography (PET) imaging data acquired with different tracers challenges the integration of multi-site tau PET data, necessitating trustworthy harmonization techniques for better utilization of emerging large-scale datasets. Unlike other imaging modalities, harmonization of multi-site tau PET data involves more than intensity mapping; it also entails intricate pattern alterations attributed to tracer binding properties, which makes existing statistical methods inadequate. Meanwhile, effective data preprocessing is required to eliminate the artifacts caused by off-target binding and partial volume effects to enable meaningful comparison and harmonization. In this paper, we propose a systematic tau PET harmonization framework that combines surface-based data preprocessing with a diffusion model for generating the vertex-wise mapping between multi-site tau standardized uptake value ratio (SUVR) maps on the cortical surface. In experiments using large-scale Alzheimer's Disease Neuroimaging Initiative (ADNI) and Health and Aging Brain Study-Health Disparities (HABS-HD) data with different tracers, we demonstrate that our method successfully achieves harmonization, generating SUVR maps with consistent pattern distributions while preserving individual variability.
{"title":"TAU PET HARMONIZATION VIA SURFACE-BASED DIFFUSION MODEL.","authors":"Jiaxin Yue, Jianwei Zhang, Lujia Zhong, Yonggang Shi","doi":"10.1109/isbi60581.2025.10981166","DOIUrl":"https://doi.org/10.1109/isbi60581.2025.10981166","url":null,"abstract":"<p><p>The heterogeneity inherent in tau positron emission tomography (PET) imaging data across different tracers challenges the integration of multi-site tau PET data, thereby necessitating the trustful harmonization technique for a better utilization of the emerging large-scale datasets. Unlike other imaging modalities, the harmonization among multi-site tau PET data involves more than intensity mapping but contains intricate pattern alterations attributed to tracer binding properties, which makes the existing statistical methods inadequate. Meanwhile, the effective data preprocessing is required to eliminate the artifacts caused by off-target binding and partial volume effect for meaningful comparison and harmonization. In this paper, we propose a systematic tau PET harmonization framework that involves the surface-based data preprocessing and diffusion model for generating the vertex-wise mapping between multi-site tau standardized uptake value ratio (SUVR) on the cortical surface. In the experiments, using large-scale Alzheimer's Disease Neuroimaging Initiative (ADNI) and Health and Aging Brain Study-Health Disparities (HABS-HD) data with different tracers, we demonstrate our method can successfully achieve harmonization by generating the SUVR maps with consistent pattern distributions and persevering the individual variability.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381844/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DUAL PROMPTING FOR DIVERSE COUNT-LEVEL PET DENOISING
Pub Date: 2025-04-01 | Epub Date: 2025-05-12 | DOI: 10.1109/isbi60581.2025.10980695
Xiaofeng Liu, Yongsong Huang, Thibault Marin, Samira Vafay Eslahi, Amal Tiss, Yanis Chemli, Keith A Johnson, Georges El Fakhri, Jinsong Ouyang
Positron emission tomography (PET) volumes to be denoised inherently have diverse count levels, which makes it challenging for a unified model to tackle the varied cases. In this work, we resort to the recently flourishing prompt learning paradigm to achieve generalizable PET denoising across different count levels. Specifically, we propose dual prompts to guide PET denoising in a divide-and-conquer manner, i.e., an explicit count-level prompt that provides specific prior information and an implicit general denoising prompt that encodes essential PET denoising knowledge. A novel prompt fusion module is then developed to unify the heterogeneous prompts, followed by a prompt-feature interaction module that injects the prompts into the features. The prompts dynamically guide the noise-conditioned denoising process. Therefore, we can efficiently train a unified denoising model for various count levels and deploy it to different cases with personalized prompts. We evaluated the method on 1940 low-count PET 3D volumes with uniformly randomly selected 13-22% fractions of events from 97 18F-MK6240 tau PET studies. The results show that our dual prompting, informed by the count level, largely improves performance and outperforms the count-conditional model.
{"title":"DUAL PROMPTING FOR DIVERSE COUNT-LEVEL PET DENOISING.","authors":"Xiaofeng Liu, Yongsong Huang, Thibault Marin, Samira Vafay Eslahi, Amal Tiss, Yanis Chemli, Keith A Johnson, Georges El Fakhri, Jinsong Ouyang","doi":"10.1109/isbi60581.2025.10980695","DOIUrl":"10.1109/isbi60581.2025.10980695","url":null,"abstract":"<p><p>The to-be-denoised positron emission tomography (PET) volumes are inherent with diverse count levels, which imposes challenges for a unified model to tackle varied cases. In this work, we resort to the recently flourished prompt learning to achieve generalizable PET denoising with different count levels. Specifically, we propose dual prompts to guide the PET denoising in a divide-and-conquer manner, i.e., an explicitly count-level prompt to provide the specific prior information and an implicitly general denoising prompt to encode the essential PET denoising knowledge. Then, a novel prompt fusion module is developed to unify the heterogeneous prompts, followed by a prompt-feature interaction module to inject prompts into the features. The prompts are able to dynamically guide the noise-conditioned denoising process. Therefore, we are able to efficiently train a unified denoising model for various count levels, and deploy it to different cases with personalized prompts. We evaluated on 1940 low-count PET 3D volumes with uniformly randomly selected 13-22% fractions of events from 97 <sup>18</sup>F-MK6240 tau PET studies. It shows our dual prompting can largely improve the performance with informed count-level and outperform the count-conditional model.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12360122/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144884477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}