Quantifying Tumor Microvasculature With Optical Coherence Angiography and Intravoxel Incoherent Motion Diffusion MRI
W. Jeffrey Zabel;Héctor Contreras-Sánchez;Warren Foltz;Costel Flueraru;Edward Taylor;Alex Vitkin
Pub Date : 2025-09-10  DOI: 10.1109/TMI.2025.3607752
Intravoxel Incoherent Motion (IVIM) MRI is a contrast-agent-free microvascular imaging method finding increasing use in biomedicine. However, there is uncertainty in the ability of IVIM-MRI to quantify tissue microvasculature given MRI’s limited spatial resolution (mm scale). Nine NRG mice were subcutaneously inoculated with human pancreatic cancer BxPC-3 cells transfected with DsRed, and MR-compatible plastic window chambers were surgically installed in the dorsal skinfold. Mice were imaged with speckle variance optical coherence tomography (OCT) and colour Doppler OCT, providing high-resolution 3D measurements of the vascular volume density (VVD) and the average Doppler phase shift ($\overline{\Delta \phi}$), respectively. IVIM imaging was performed on a 7T preclinical MRI scanner to generate maps of the perfusion fraction f, the extravascular diffusion coefficient $D_{\text{slow}}$, and the intravascular diffusion coefficient $D_{\text{fast}}$. The IVIM parameter maps were coregistered with the optical datasets to enable direct spatial correlation. A significant positive correlation was noted between OCT’s VVD and MR’s f (Pearson correlation coefficient $r = 0.34$, $p < 0.0001$). Surprisingly, no significant correlation was found between $\overline{\Delta \phi}$ and $D_{\text{fast}}$. This may be due to larger errors in the determined $D_{\text{fast}}$ values compared to f, as confirmed by Monte Carlo simulations. Several other inter- and intra-modality correlations were also quantified. Direct same-animal correlation of clinically applicable IVIM imaging with preclinical OCT microvascular imaging supports the biomedical relevance of IVIM-MRI metrics, for example through f’s relationship to the VVD.
IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 789-798.
CASHNet: Context-Aware Semantics-Driven Hierarchical Network for Hybrid Diffeomorphic CT-CBCT Image Registration
Xiaoru Gao;Housheng Xie;Donghua Hang;Guoyan Zheng
Pub Date : 2025-09-09  DOI: 10.1109/TMI.2025.3607700
Computed Tomography (CT) to Cone-Beam Computed Tomography (CBCT) image registration is crucial for image-guided radiotherapy and surgical procedures. However, achieving accurate CT-CBCT registration remains challenging due to various factors such as inconsistent intensities, low contrast resolution and imaging artifacts. In this study, we propose a Context-Aware Semantics-driven Hierarchical Network (referred to as CASHNet), which hierarchically integrates context-aware semantics-encoded features into a coarse-to-fine registration scheme, to explicitly enhance semantic structural perception during progressive alignment. Moreover, it leverages diffeomorphisms to integrate rigid and non-rigid registration within a single end-to-end trainable network, enabling anatomically plausible deformations and preserving topological consistency. CASHNet comprises a Siamese Mamba-based multi-scale feature encoder and a coarse-to-fine registration decoder, which integrates a Rigid Registration (RR) module with multiple Semantics-guided Velocity Estimation and Feature Alignment (SVEFA) modules operating at different resolutions. Each SVEFA module comprises three carefully designed components: i) a cross-resolution feature aggregation (CFA) component that synthesizes enhanced global contextual representations, ii) a semantics perception and encoding (SPE) component that captures and encodes local semantic information, and iii) an incremental velocity estimation and feature alignment (IVEFA) component that leverages contextual and semantic features to update velocity fields and to align features. These modules work synergistically to boost the overall registration performance. Extensive experiments on three typical yet challenging CT-CBCT datasets of both soft and hard tissues demonstrate the superiority of our proposed method over other state-of-the-art methods. The code will be publicly available at https://github.com/xiaorugao999/CASHNet
IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 825-842.
Adaptive Sequential Bayesian Iterative Learning for Myocardial Motion Estimation on Cardiac Image Sequences
Shuxin Zhuang;Heye Zhang;Dong Liang;Hui Liu;Zhifan Gao
Pub Date : 2025-08-18  DOI: 10.1109/TMI.2025.3599487
Motion estimation of the left ventricular myocardium on cardiac image sequences is crucial for assessing cardiac function. However, the intensity variation of cardiac image sequences introduces uncertain interference into myocardial motion estimation, and such imaging-related uncertain interference appears in different cardiac imaging modalities. We propose adaptive sequential Bayesian iterative learning to overcome this challenge. Specifically, our method applies adaptive structural inference to the state transition and the state observation to cope with complex myocardial motion in uncertain settings. In the state transition, adaptive structural inference establishes a hierarchical structure recurrence to obtain the complex latent representation of cardiac image sequences. In the state observation, adaptive structural inference forms a chain-structure mapping to correlate the latent representation of the cardiac image sequence with that of the motion. Extensive experiments on US, CMR, and TMR datasets covering 1270 patients (650 for CMR, 500 for US, and 120 for TMR) demonstrate the effectiveness of our method and its superiority over eight state-of-the-art motion estimation methods.
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 406-420.
Hierarchical Contrastive Learning for Precise Whole-Body Anatomical Localization in PET/CT Imaging
Yaozong Gao;Yiran Shu;Mingyang Yu;Yanbo Chen;Jingyu Liu;Shaonan Zhong;Weifang Zhang;Yiqiang Zhan;Xiang Sean Zhou;Xinlu Wang;Meixin Zhao;Dinggang Shen
Pub Date : 2025-08-18  DOI: 10.1109/TMI.2025.3599197
Automatic anatomical localization is critical for radiology report generation. While many studies focus on lesion detection and segmentation, anatomical localization—accurately describing lesion positions in radiology reports—has received less attention. Conventional segmentation-based methods are limited to organ-level localization and often fail in severe disease cases due to low segmentation accuracy. To address these limitations, we reformulate anatomical localization as an image-to-text retrieval task. Specifically, we propose a CLIP-based framework that aligns lesion image patches with anatomically descriptive text embeddings in a shared multimodal space. By projecting lesion features into the semantic space and retrieving the most relevant anatomical descriptions in a coarse-to-fine manner, our method achieves fine-grained lesion localization with high accuracy across the entire body. Our main contributions are as follows: (1) hierarchical anatomical retrieval, which organizes 387 locations into a two-level hierarchy and retrieves from the first level of 124 coarse categories to narrow the search space and reduce localization complexity; (2) augmented location descriptions, which integrate domain-specific anatomical knowledge to enhance semantic representation and improve visual-text alignment; and (3) semi-hard negative sample mining, which improves training stability and discriminative learning by avoiding the selection of overly similar negative samples that may introduce label noise or semantic ambiguity. We validate our method on two whole-body PET/CT datasets, achieving 84.13% localization accuracy on the internal test set and 80.42% on the external test set, with a per-lesion inference time of 34 ms. The proposed framework also demonstrates superior robustness in complex clinical cases compared to segmentation-based approaches.
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 391-405.
SynthAorta: A 3D Mesh Dataset of Parametrized Physiological Healthy Aortas
Domagoj Bošnjak;Gian Marco Melito;Richard Schussnig;Katrin Ellermann;Thomas-Peter Fries
Pub Date : 2025-08-18  DOI: 10.1109/TMI.2025.3599937
The effects of the aortic geometry on its mechanics and blood flow, and subsequently on aortic pathologies, remain largely unexplored. The main obstacle lies in obtaining patient-specific aorta models, an extremely difficult procedure in terms of ethics and availability, segmentation, mesh generation, and all of the accompanying processes. In contrast, idealized models are easy to build but do not faithfully represent patient-specific variability. Additionally, a unified aortic parametrization across clinical and engineering practice has not yet been achieved. To bridge this gap, we introduce a new set of statistical parameters to generate synthetic models of the aorta. The parameters possess geometric significance and fall within physiological ranges, effectively bridging the disciplines of clinical medicine and engineering. Smoothly blended, realistic representations are recovered with convolution surfaces. These enable high-quality visualization and biological appearance, whereas the structured mesh generation paves the way for numerical simulations. The only requirements of the approach are one patient-specific aorta model and statistical data for the parameter values obtained from the literature. The output of this work is SynthAorta, a dataset of ready-to-use synthetic, physiological aorta models, each containing a centerline, a surface representation, and a structured hexahedral finite element mesh. The meshes are structured and fully consistent between different cases, making them eminently suitable for reduced order modeling and machine learning approaches.
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 421-430. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11129067
An Unsupervised Learning Approach for Reconstructing 3T-Like Images From 0.3T MRI Without Paired Training Data
Huaishui Yang;Shaojun Liu;Yilong Liu;Lingyan Zhang;Shoujin Huang;Jiayu Zheng;Jingzhe Liu;Hua Guo;Ed X. Wu;Mengye Lyu
Pub Date : 2025-08-11  DOI: 10.1109/TMI.2025.3597401
Magnetic resonance imaging (MRI) is powerful in medical diagnostics, yet high-field MRI, despite offering superior image quality, incurs significant costs for procurement, installation, maintenance, and operation, restricting its availability and accessibility, especially in low- and middle-income countries. Addressing this, our study proposes an unsupervised learning algorithm based on cycle-consistent generative adversarial networks. This framework transforms 0.3T low-field MRI into higher-quality 3T-like images, bypassing the need for paired low/high-field training data. The proposed architecture integrates two novel modules to enhance reconstruction quality: (1) an attention block that dynamically balances high-field-like features with the original low-field input, and (2) an edge block that refines boundary details, providing more accurate structural reconstruction. The proposed generative model is trained on large-scale, unpaired, public datasets, and further validated on paired low/high-field acquisitions of three major clinical MRI sequences: T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) imaging. It demonstrates notable improvements in tissue contrast and signal-to-noise ratio while preserving anatomical fidelity. This approach utilizes rich information from publicly available MRI resources, providing a data-efficient unsupervised alternative that complements supervised methods to enhance the utility of low-field MRI.
IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5358-5371.
EPDiff: Erasure Perception Diffusion Model for Unsupervised Anomaly Detection in Preoperative Multimodal Images
Jiazheng Wang;Min Liu;Wenting Shen;Renjie Ding;Yaonan Wang;Erik Meijering
Pub Date : 2025-08-11  DOI: 10.1109/TMI.2025.3597545
Unsupervised anomaly detection (UAD) methods typically detect anomalies by learning and reconstructing the normative distribution. However, since anomalies constantly invade and affect their surroundings, sub-healthy areas at the junction present structural deformations that can easily be misidentified as anomalies, posing difficulties for UAD methods that solely learn the normative distribution. Multimodal images can help address these challenges, as they provide complementary information about anomalies. Therefore, this paper proposes a novel method for UAD in preoperative multimodal images, called the Erasure Perception Diffusion model (EPDiff). First, the Local Erasure Progressive Training (LEPT) framework is designed to better rebuild sub-healthy structures around anomalies through the diffusion model with a two-phase process. Initially, healthy images are used to capture deviation features labeled as potential anomalies. Then, these anomalies are locally erased in multimodal images to progressively learn sub-healthy structures, yielding a more detailed reconstruction around anomalies. Second, the Global Structural Perception (GSP) module is developed in the diffusion model to realize global structural representation and correlation within images and between modalities through interactions of high-level semantic information. In addition, a training-free module, named the Multimodal Attention Fusion (MAF) module, is presented for weighted fusion of anomaly maps between different modalities and for obtaining binary anomaly outputs. Experimental results show that EPDiff improves the AUPRC and mDice scores by 2% and 3.9% on BraTS2021, and by 5.2% and 4.5% on Shifts, over the state-of-the-art methods, demonstrating the applicability of EPDiff in diverse anomaly diagnosis. The code is available at https://github.com/wjiazheng/EPDiff
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 379-390.
Automatic Choroid Segmentation and Thickness Measurement Based on Mixed Attention-Guided Multiscale Feature Fusion Network
Xiaoyu Zhu;Shiyin Li;HongLiang Bi;Lina Guan;Haiyang Liu;Zhaolin Lu
Pub Date : 2025-08-08  DOI: 10.1109/TMI.2025.3597026
Choroidal thickness variations serve as critical biomarkers for numerous ophthalmic diseases. Accurate segmentation and quantification of the choroid in optical coherence tomography (OCT) images is essential for clinical diagnosis and disease progression monitoring. Because public OCT datasets cover only a small number of disease types involving choroidal thickness changes and lack publicly available labels, we constructed the Xuzhou Municipal Hospital (XZMH)-Choroid dataset. This dataset contains annotated OCT images of normal cases and eight choroid-related diseases. However, segmentation of the choroid in OCT images remains a formidable challenge due to the confounding factors of blurred boundaries, non-uniform texture, and lesions. To overcome these challenges, we proposed a mixed attention-guided multiscale feature fusion network (MAMFF-Net). This network integrates a Mixed Attention Encoder (MAE) for enhanced fine-grained feature extraction, a deformable multiscale feature fusion path (DMFFP) for adaptive feature integration across lesion deformations, and a multiscale pyramid layer aggregation (MPLA) module for improved contextual representation learning. Comparative experiments show that MAMFF-Net achieves better segmentation performance than other deep learning methods (mDice: 97.44, mIoU: 95.11, mAcc: 97.71). Based on the choroidal segmentation produced by MAMFF-Net, an algorithm for automated choroidal thickness measurement was developed, and its measurements approached the level of senior specialists.
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 350-363.
Unsupervised Brain Lesion Segmentation Using Posterior Distributions Learned by Subspace-Based Generative Model
Huixiang Zhuang;Yue Guan;Yi Ding;Chang Xu;Zijun Cheng;Yuhao Ma;Ruihao Liu;Ziyu Meng;Li Cao;Yao Li;Zhi-Pei Liang
Pub Date : 2025-08-08  DOI: 10.1109/TMI.2025.3597080
Unsupervised brain lesion segmentation, which focuses on learning normative distributions from images of healthy subjects, is less dependent on lesion-labeled data and thus exhibits better generalization capabilities. A fundamental challenge in learning normative distributions of images lies in the high dimensionality that arises when image pixels are treated as correlated random variables to capture spatial dependence. In this study, we proposed a subspace-based deep generative model to learn the posterior normal distributions. Specifically, we used probabilistic subspace models to capture the spatial-intensity and spatial-structure distributions of brain images from healthy subjects. These models captured prior spatial-intensity and spatial-structure variations effectively by treating the subspace coefficients as random variables, with the basis functions being the eigen-images and eigen-density functions learned from the training data. These prior distributions were then converted to posterior distributions, including both the posterior normal and posterior lesion distributions for a given image, using the subspace-based generative model and subspace-assisted Bayesian analysis, respectively. Finally, an unsupervised fusion classifier was used to combine the posterior and likelihood features for lesion segmentation. The proposed method has been evaluated on simulated and real lesion data, including tumor, multiple sclerosis, and stroke, demonstrating superior segmentation accuracy and robustness over state-of-the-art methods. Our proposed method holds promise for enhancing unsupervised brain lesion delineation in clinical applications.
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 364-378.
An Anisotropic Cross-View Texture Transfer With Multi-Reference Non-Local Attention for CT Slice Interpolation
Kwang-Hyun Uhm;Hyunjun Cho;Sung-Hoo Hong;Seung-Won Jung
Pub Date : 2025-08-08  DOI: 10.1109/TMI.2025.3596957
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volumes has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation that fully utilizes the anisotropic nature of 3D CT volumes. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets, including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT
IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 336-349.