Lesion Asymmetry Screening Assisted Global Awareness Multi-View Network for Mammogram Classification
Xinchuan Liu;Luhao Sun;Chao Li;Bowen Han;Wenzong Jiang;Tianhao Yuan;Weifeng Liu;Zhaoyun Liu;Zhiyong Yu;Baodi Liu
Pub Date: 2025-09-09  DOI: 10.1109/TMI.2025.3607877  (IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 777-788)
Mammography is a primary method for early breast cancer screening, and developing deep learning-based computer-aided diagnosis systems is of great significance. However, current deep learning models typically treat each image as an independent entity for diagnosis, rather than integrating images from multiple views to diagnose the patient. These methods do not fully model the complex interactions between different views, resulting in poor diagnostic performance and interpretability. To address this issue, this paper proposes a novel end-to-end framework for breast cancer diagnosis: the lesion asymmetry screening assisted global awareness multi-view network (LAS-GAM). Unlike the more common image-level diagnostic models, LAS-GAM operates at the patient level, simulating the workflow of radiologists analyzing mammographic images. The framework processes the four views of a patient and revolves around two key modules: a global module and a lesion screening module. The global module simulates the comprehensive assessment performed by radiologists, integrating complementary information from the craniocaudal (CC) and mediolateral oblique (MLO) views of both breasts to generate global features that represent the patient's overall condition. The lesion screening module mimics the process of locating lesions by comparing symmetric regions in contralateral views, identifying potential lesion areas and extracting lesion-specific features with a lightweight model. By combining the global features and lesion-specific features, LAS-GAM simulates the diagnostic process and makes patient-level predictions. Moreover, it is trained using only patient-level labels, significantly reducing annotation costs. Experiments on the Digital Database for Screening Mammography (DDSM) and an in-house dataset validate LAS-GAM, achieving AUCs of 0.817 and 0.894, respectively.
Co-Activation Pattern Analysis Based on Hidden Semi-Markov Model for Brain Spatiotemporal Dynamics
Zihao Yuan;Jiaqing Chen;Han Qiu;Houxiang Wang;Yangxin Huang;Fuchun Lin
Pub Date: 2025-09-08  DOI: 10.1109/TMI.2025.3607113  (IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 843-852)
Analyzing the spontaneous activity of the human brain using dynamic approaches can reveal functional organizations. Co-activation pattern (CAP) analysis of signals from different brain regions is used to characterize brain neural networks that may serve specialized functions. However, CAP is based on spatial information, ignores reproducible temporal transition patterns, and lacks robustness to data with a low signal-to-noise ratio (SNR). To address these issues, this study proposes a new CAP framework based on a hidden semi-Markov model (HSMM), called HSMM-CAP analysis, which investigates spatiotemporal CAPs (stCAPs) of the brain. HSMM-CAP uses empirical spatial distributions of stCAPs as emission models and assumes that the state sequence of stCAPs follows a semi-Markov process. Based on the assumptions of sparsity, heterogeneity, and the semi-Markov property of stCAPs, the HSMM-CAP-K-means method is constructed to infer the state sequence and transition parameters of stCAPs. In addition, HSMM-CAP provides the inverse relationship between the number of states and sparsity. Simulation studies verify the performance of HSMM-CAP at different SNR levels, and the spatiotemporal dynamics of stCAPs are revealed by the proposed method on real-world resting-state fMRI data. Our method provides a new data-driven computational framework for revealing the brain's spatiotemporal dynamics from resting-state fMRI data.
{"title":"Co-Activation Pattern Analysis Based on Hidden Semi-Markov Model for Brain Spatiotemporal Dynamics","authors":"Zihao Yuan;Jiaqing Chen;Han Qiu;Houxiang Wang;Yangxin Huang;Fuchun Lin","doi":"10.1109/TMI.2025.3607113","DOIUrl":"10.1109/TMI.2025.3607113","url":null,"abstract":"Analyzing the spontaneous activity of the human brain using dynamic approaches can reveal functional organizations. The co-activation pattern (CAP) analysis of signals from different brain regions is used to characterize brain neural networks that may serve specialized functions. However, CAP is based on spatial information but ignores temporal reproducible transition patterns, and lacks robustness to low signal-to-noise rate (SNR) data. To address these issues, this study proposes a new CAP framework based on hidden semi-Markov model (HSMM) called HSMM-CAP analysis, which can be performed to investigate spatiotemporal CAPs (stCAPs) of the brain. HSMM-CAP uses empirical spatial distributions of stCAPs as emission models, and assumes that the state sequence of stCAPs follows a semi-Markov process. Based on the assumptions of sparsity, heterogeneity, and semi-Markov property of stCAPs, the HSMM-CAP-K-means method is constructed to infer the state sequence and transition parameters of stCAPs. In addition, HSMM-CAP provides the inverse relationship between the number of states and sparsity. Simulation studies verify the performance of HSMM-CAP at different levels of SNR. The spatiotemporal dynamics of stCAPs are also revealed by the proposed method on real-world resting-state fMRI data. Our method provides a new data-driven computational framework for revealing the brain spatiotemporal dynamics of resting-state fMRI data.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"45 2","pages":"843-852"},"PeriodicalIF":0.0,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145017614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MetaSSL: A General Heterogeneous Loss for Semi-Supervised Medical Image Segmentation
Weiren Zhao;Lanfeng Zhong;Xin Liao;Wenjun Liao;Sichuan Zhang;Shaoting Zhang;Guotai Wang
Pub Date: 2025-09-03  DOI: 10.1109/TMI.2025.3605617  (IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 751-763)
Semi-Supervised Learning (SSL) is important for reducing the annotation cost of medical image segmentation models. State-of-the-art SSL methods such as Mean Teacher, FixMatch, and Cross Pseudo Supervision (CPS) are mainly based on consistency regularization or pseudo-label supervision between a reference prediction and a supervised prediction. Despite their effectiveness, these methods overlook potential noise in the labeled data and focus mainly on strategies for generating the reference prediction, while ignoring the heterogeneous value of different unlabeled pixels. We argue that effectively mining the rich information contained in the two predictions through the loss function, rather than the specific strategy for obtaining a reference prediction, is more essential for SSL, and we propose a universal framework, MetaSSL, based on a spatially heterogeneous loss that assigns different weights to pixels by simultaneously leveraging the uncertainty and consistency information between the reference and supervised predictions. Specifically, we split the predictions on unlabeled data into four regions with decreasing weights in the loss: Unanimous and Confident (UC), Unanimous and Suspicious (US), Discrepant and Confident (DC), and Discrepant and Suspicious (DS), where an adaptive threshold distinguishes confident predictions from suspicious ones. The heterogeneous loss is also applied to labeled images for robust learning, considering potential annotation noise. Our method is plug-and-play and generalizes to most existing SSL methods. Experimental results show that it significantly improves segmentation performance when integrated with existing SSL frameworks on different datasets. Code is available at https://github.com/HiLab-git/MetaSSL.
{"title":"MetaSSL: A General Heterogeneous Loss for Semi-Supervised Medical Image Segmentation","authors":"Weiren Zhao;Lanfeng Zhong;Xin Liao;Wenjun Liao;Sichuan Zhang;Shaoting Zhang;Guotai Wang","doi":"10.1109/TMI.2025.3605617","DOIUrl":"10.1109/TMI.2025.3605617","url":null,"abstract":"Semi-Supervised Learning (SSL) is important for reducing the annotation cost for medical image segmentation models. State-of-the-art SSL methods such as Mean Teacher, FixMatch and Cross Pseudo Supervision (CPS) are mainly based on consistency regularization or pseudo-label supervision between a reference prediction and a supervised prediction. Despite the effectiveness, they have overlooked the potential noise in the labeled data, and mainly focus on strategies to generate the reference prediction, while ignoring the heterogeneous values of different unlabeled pixels. We argue that effectively mining the rich information contained by the two predictions in the loss function, instead of the specific strategy to obtain a reference prediction, is more essential for SSL, and propose a universal framework <bold>MetaSSL</b> based on a spatially heterogeneous loss that assigns different weights to pixels by simultaneously leveraging the uncertainty and consistency information between the reference and supervised predictions. Specifically, we split the predictions on unlabeled data into four regions with decreasing weights in the loss: Unanimous and Confident (UC), Unanimous and Suspicious (US), Discrepant and Confident (DC), and Discrepant and Suspicious (DS), where an adaptive threshold is proposed to distinguish confident predictions from suspicious ones. The heterogeneous loss is also applied to labeled images for robust learning considering the potential annotation noise. Our method is plug-and-play and general to most existing SSL methods. The experimental results showed that it improved the segmentation performance significantly when integrated with existing SSL frameworks on different datasets. Code is available at <uri>https://github.com/HiLab-git/MetaSSL</uri>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"45 2","pages":"751-763"},"PeriodicalIF":0.0,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144987556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teacher–Student Instance-Level Adversarial Augmentation for Single Domain Generalized Medical Image Segmentation
Zhengshan Wang;Long Chen;Xuelin Xie;Yang Zhang;Yunpeng Cai;Weiping Ding
Pub Date: 2025-09-02  DOI: 10.1109/TMI.2025.3605162  (IEEE Transactions on Medical Imaging, vol. 45, no. 2, pp. 764-776)
Recently, single-source domain generalization (SDG) has gained popularity in medical image segmentation. As a prominent technique, adversarial image augmentation can generate synthetic training data that are challenging for the segmentation model to recognize. To avoid over-augmentation, existing adversarial works often employ augmenters with relatively simple structures for medical images, typically operating at the image level, which limits the diversity of the augmented images. In this paper, we propose a Teacher-Student Instance-level Adversarial Augmentation (TSIAA) model for generalized medical image segmentation. The objective of TSIAA is to derive domain-generalizable representations by exploring out-of-source data distributions. First, we construct an Instance-level Image Augmenter (IIAG) from several Instance-level Augmentation Modules (IAMs), which are based on a learnable constrained Bézier transformation function. Compared to image-level adversarial augmentation, instance-level adversarial augmentation breaks the uniformity of augmentation rules across different structures within an image, thereby providing greater diversity. Then, TSIAA conducts Teacher-Student (TS) learning in an adversarial manner, alternating between novel image augmentation and generalized representation learning. The former explores out-of-source yet plausible data, while the latter continuously updates both the student and the teacher so that the original and augmented features maintain consistent and generalized characteristics. By integrating both strategies, the proposed TSIAA model achieves significant improvements over state-of-the-art methods on four challenging SDG tasks. The code can be accessed at https://github.com/Wangzs0228/TSIAA.
Adaptive Sequential Bayesian Iterative Learning for Myocardial Motion Estimation on Cardiac Image Sequences
Shuxin Zhuang;Heye Zhang;Dong Liang;Hui Liu;Zhifan Gao
Pub Date: 2025-08-18  DOI: 10.1109/TMI.2025.3599487  (IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 406-420)
Motion estimation of the left ventricular myocardium on cardiac image sequences is crucial for assessing cardiac function. However, the intensity variation of cardiac image sequences introduces uncertain interference into myocardial motion estimation, and such imaging-related interference appears across different cardiac imaging modalities. We propose adaptive sequential Bayesian iterative learning to overcome this challenge. Specifically, our method applies adaptive structural inference to both state transition and state observation to cope with complex myocardial motion under uncertainty. In state transition, adaptive structural inference establishes a hierarchical structure recurrence to obtain a rich latent representation of the cardiac image sequence. In state observation, it forms a chain-structure mapping that correlates the latent representation of the cardiac image sequence with that of the motion. Extensive experiments on US, CMR, and TMR datasets covering 1270 patients (650 CMR, 500 US, and 120 TMR) demonstrate the effectiveness of our method and its superiority over eight state-of-the-art motion estimation methods.
Hierarchical Contrastive Learning for Precise Whole-Body Anatomical Localization in PET/CT Imaging
Yaozong Gao;Yiran Shu;Mingyang Yu;Yanbo Chen;Jingyu Liu;Shaonan Zhong;Weifang Zhang;Yiqiang Zhan;Xiang Sean Zhou;Xinlu Wang;Meixin Zhao;Dinggang Shen
Pub Date: 2025-08-18  DOI: 10.1109/TMI.2025.3599197  (IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 391-405)
Automatic anatomical localization is critical for radiology report generation. While many studies focus on lesion detection and segmentation, anatomical localization, i.e., accurately describing lesion positions in radiology reports, has received less attention. Conventional segmentation-based methods are limited to organ-level localization and often fail in severe disease cases due to low segmentation accuracy. To address these limitations, we reformulate anatomical localization as an image-to-text retrieval task. Specifically, we propose a CLIP-based framework that aligns lesion image patches with anatomically descriptive text embeddings in a shared multimodal space. By projecting lesion features into the semantic space and retrieving the most relevant anatomical descriptions in a coarse-to-fine manner, our method achieves fine-grained lesion localization with high accuracy across the entire body. Our main contributions are as follows: (1) hierarchical anatomical retrieval, which organizes 387 locations into a two-level hierarchy and first retrieves among 124 coarse categories to narrow the search space and reduce localization complexity; (2) augmented location descriptions, which integrate domain-specific anatomical knowledge to enhance semantic representation and improve visual-text alignment; and (3) semi-hard negative sample mining, which improves training stability and discriminative learning by avoiding overly similar negative samples that may introduce label noise or semantic ambiguity. We validate our method on two whole-body PET/CT datasets, achieving 84.13% localization accuracy on the internal test set and 80.42% on the external test set, with a per-lesion inference time of 34 ms. The proposed framework also demonstrates superior robustness in complex clinical cases compared to segmentation-based approaches.
SynthAorta: A 3D Mesh Dataset of Parametrized Physiological Healthy Aortas
Domagoj Bošnjak;Gian Marco Melito;Richard Schussnig;Katrin Ellermann;Thomas-Peter Fries
Pub Date: 2025-08-18  DOI: 10.1109/TMI.2025.3599937  (IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 421-430)
The effects of aortic geometry on its mechanics and blood flow, and subsequently on aortic pathologies, remain largely unexplored. The main obstacle lies in obtaining patient-specific aorta models, an extremely difficult procedure in terms of ethics, data availability, segmentation, mesh generation, and all of the accompanying processes. In contrast, idealized models are easy to build but do not faithfully represent patient-specific variability. Additionally, a unified aortic parametrization shared by clinical and engineering practice has not yet been achieved. To bridge this gap, we introduce a new set of statistical parameters for generating synthetic models of the aorta. The parameters carry geometric meaning and fall within physiological ranges, effectively bridging clinical medicine and engineering. Smoothly blended, realistic representations are recovered with convolution surfaces. These enable high-quality visualization and biological appearance, whereas structured mesh generation paves the way for numerical simulations. The only requirements of the approach are one patient-specific aorta model and statistical parameter values obtained from the literature. The output of this work is SynthAorta, a dataset of ready-to-use synthetic, physiological aorta models, each containing a centerline, a surface representation, and a structured hexahedral finite element mesh. The meshes are structured and fully consistent across cases, making them eminently suitable for reduced-order modeling and machine learning approaches.
{"title":"SynthAorta: A 3D Mesh Dataset of Parametrized Physiological Healthy Aortas","authors":"Domagoj Bošnjak;Gian Marco Melito;Richard Schussnig;Katrin Ellermann;Thomas-Peter Fries","doi":"10.1109/TMI.2025.3599937","DOIUrl":"10.1109/TMI.2025.3599937","url":null,"abstract":"The effects of the aortic geometry on its mechanics and blood flow, and subsequently on aortic pathologies, remain largely unexplored. The main obstacle lies in obtaining patient-specific aorta models, an extremely difficult procedure in terms of ethics and availability, segmentation, mesh generation, and all of the accompanying processes. Contrastingly, idealized models are easy to build but do not faithfully represent patient-specific variability. Additionally, a unified aortic parametrization in clinic and engineering has not yet been achieved. To bridge this gap, we introduce a new set of statistical parameters to generate synthetic models of the aorta. The parameters possess geometric significance and fall within physiological ranges, effectively bridging the disciplines of clinical medicine and engineering. Smoothly blended realistic representations are recovered with convolution surfaces. These enable high-quality visualization and biological appearance, whereas the structured mesh generation paves the way for numerical simulations. The only requirement of the approach is one patient-specific aorta model and the statistical data for parameter values obtained from the literature. The output of this work is <italic>SynthAorta</i>, a dataset of ready-to-use synthetic, physiological aorta models, each containing a centerline, surface representation, and a structured hexahedral finite element mesh. The meshes are structured and fully consistent between different cases, making them imminently suitable for reduced order modeling and machine learning approaches.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"45 1","pages":"421-430"},"PeriodicalIF":0.0,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11129067","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144877632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Unsupervised Learning Approach for Reconstructing 3T-Like Images From 0.3T MRI Without Paired Training Data
Huaishui Yang;Shaojun Liu;Yilong Liu;Lingyan Zhang;Shoujin Huang;Jiayu Zheng;Jingzhe Liu;Hua Guo;Ed X. Wu;Mengye Lyu
Pub Date: 2025-08-11  DOI: 10.1109/TMI.2025.3597401  (IEEE Transactions on Medical Imaging, vol. 44, no. 12, pp. 5358-5371)
Magnetic resonance imaging (MRI) is a powerful tool in medical diagnostics, yet high-field MRI, despite offering superior image quality, incurs significant costs for procurement, installation, maintenance, and operation, restricting its availability and accessibility, especially in low- and middle-income countries. To address this, our study proposes an unsupervised learning algorithm based on cycle-consistent generative adversarial networks. The framework transforms 0.3T low-field MRI into higher-quality 3T-like images, bypassing the need for paired low/high-field training data. The proposed architecture integrates two novel modules to enhance reconstruction quality: (1) an attention block that dynamically balances high-field-like features with the original low-field input, and (2) an edge block that refines boundary details, providing more accurate structural reconstruction. The generative model is trained on large-scale, unpaired, public datasets and further validated on paired low/high-field acquisitions of three major clinical MRI sequences: T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) imaging. It demonstrates notable improvements in tissue contrast and signal-to-noise ratio while preserving anatomical fidelity. This approach leverages the rich information in publicly available MRI resources, providing a data-efficient unsupervised alternative that complements supervised methods and enhances the utility of low-field MRI.
EPDiff: Erasure Perception Diffusion Model for Unsupervised Anomaly Detection in Preoperative Multimodal Images
Jiazheng Wang;Min Liu;Wenting Shen;Renjie Ding;Yaonan Wang;Erik Meijering
Pub Date: 2025-08-11  DOI: 10.1109/TMI.2025.3597545  (IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 379-390)
Unsupervised anomaly detection (UAD) methods typically detect anomalies by learning and reconstructing the normative distribution. However, since anomalies invade and affect their surroundings, sub-healthy areas at the junction exhibit structural deformations that can easily be misidentified as anomalies, posing difficulties for UAD methods that learn only the normative distribution. Multimodal images can help address this challenge, as they provide complementary information about anomalies. Therefore, this paper proposes a novel method for UAD in preoperative multimodal images, called the Erasure Perception Diffusion model (EPDiff). First, the Local Erasure Progressive Training (LEPT) framework is designed to better rebuild sub-healthy structures around anomalies through a two-phase diffusion process. Initially, healthy images are used to capture deviation features labeled as potential anomalies. These anomalies are then locally erased in the multimodal images to progressively learn sub-healthy structures, yielding a more detailed reconstruction around anomalies. Second, a Global Structural Perception (GSP) module is developed within the diffusion model to capture global structural representations and correlations within images and between modalities through interactions of high-level semantic information. In addition, a training-free Multimodal Attention Fusion (MAF) module performs weighted fusion of anomaly maps across modalities and produces binary anomaly outputs. Experimental results show that EPDiff improves the AUPRC and mDice scores by 2% and 3.9% on BraTS2021, and by 5.2% and 4.5% on Shifts, over state-of-the-art methods, demonstrating its applicability to diverse anomaly diagnosis. The code is available at https://github.com/wjiazheng/EPDiff.
Automatic Choroid Segmentation and Thickness Measurement Based on Mixed Attention-Guided Multiscale Feature Fusion Network
Xiaoyu Zhu;Shiyin Li;HongLiang Bi;Lina Guan;Haiyang Liu;Zhaolin Lu
Pub Date: 2025-08-08  DOI: 10.1109/TMI.2025.3597026  (IEEE Transactions on Medical Imaging, vol. 45, no. 1, pp. 350-363)
Choroidal thickness variations serve as critical biomarkers for numerous ophthalmic diseases, so accurate segmentation and quantification of the choroid in optical coherence tomography (OCT) images is essential for clinical diagnosis and disease progression monitoring. Because public OCT datasets cover few disease types involving choroidal thickness changes and lack publicly available annotations, we constructed the Xuzhou Municipal Hospital (XZMH)-Choroid dataset, which contains annotated OCT images of normal eyes and eight choroid-related diseases. Segmentation of the choroid in OCT images remains a formidable challenge, however, due to the confounding factors of blurred boundaries, non-uniform texture, and lesions. To overcome these challenges, we propose a mixed attention-guided multiscale feature fusion network (MAMFF-Net). The network integrates a Mixed Attention Encoder (MAE) for enhanced fine-grained feature extraction, a deformable multiscale feature fusion path (DMFFP) for adaptive feature integration across lesion deformations, and a multiscale pyramid layer aggregation (MPLA) module for improved contextual representation learning. In comparative experiments, MAMFF-Net achieved better segmentation performance than other deep learning methods (mDice: 97.44, mIoU: 95.11, mAcc: 97.71). Building on the segmentation produced by MAMFF-Net, we developed an algorithm for automated choroidal thickness measurement whose results approach the level of senior specialists.
{"title":"Automatic Choroid Segmentation and Thickness Measurement Based on Mixed Attention-Guided Multiscale Feature Fusion Network","authors":"Xiaoyu Zhu;Shiyin Li;HongLiang Bi;Lina Guan;Haiyang Liu;Zhaolin Lu","doi":"10.1109/TMI.2025.3597026","DOIUrl":"10.1109/TMI.2025.3597026","url":null,"abstract":"Choroidal thickness variations serve as critical biomarkers for numerous ophthalmic diseases. Accurate segmentation and quantification of the choroid in optical coherence tomography (OCT) images is essential for clinical diagnosis and disease progression monitoring. Due to the small number of disease types in the public OCT dataset involving changes in choroidal thickness and the lack of a publicly available labeled dataset, we constructed the Xuzhou Municipal Hospital (XZMH)-Choroid dataset. This dataset contains annotated OCT images of normal and eight choroid-related diseases. However, segmentation of the choroid in OCT images remains a formidable challenge due to the confounding factors of blurred boundaries, non-uniform texture, and lesions. To overcome these challenges, we proposed a mixed attention-guided multiscale feature fusion network (MAMFF-Net). This network integrates a Mixed Attention Encoder (MAE) for enhanced fine-grained feature extraction, a deformable multiscale feature fusion path (DMFFP) for adaptive feature integration across lesion deformations, and a multiscale pyramid layer aggregation (MPLA) module for improved contextual representation learning. Through comparative experiments with other deep learning methods, we found that the MAMFF-Net model has better segmentation performance than other deep learning methods (mDice: 97.44, mIoU: 95.11, mAcc: 97.71). Based on the choroidal segmentation implemented in MAMFF-Net, an algorithm for automated choroidal thickness measurement was developed, and the automated measurement results approached the level of senior specialists.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"45 1","pages":"350-363"},"PeriodicalIF":0.0,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144802501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}