Pub Date : 2025-12-03 DOI: 10.1109/tmi.2025.3640076
Hieu T M Nguyen, Neeladrisingha Das, Rohollah Nasiri, Guillem Pratx
Cell tracking is crucial for understanding the complex patterns of cellular migration that underlie many physiological, pathological, and therapeutic processes. Positron emission particle tracking (PEPT) is a method that uses list-mode positron emission tomography (PET) data to localize moving particles non-invasively inside opaque systems. However, while the application of this method to in vivo cell tracking has previously been proposed, its implementation has been limited to tracking one cell at a time. This study investigates the feasibility of tracking multiple cells simultaneously using a recently developed expectation-maximization (EM) algorithm called PEPT-EM. The primary challenge in translating this algorithm to biomedical applications is the low radioactivity of the cells being tracked. We experimentally demonstrated the performance of the PEPT-EM algorithm on a preclinical PET scanner by tracking droplets and cells with activities ranging from tens to hundreds of Bq, in phantoms and in a murine model. We found that while background and multiplexing effects impact static source tracking, sensitivity is critical for dynamic tracking of moving sources. We successfully localized multiple single cells moving at speeds up to 25 mm/s in a murine model, marking the first use of PEPT-EM for such applications. Our findings highlight the exciting potential of PEPT for real-time, high-throughput tracking of multiple single cells in vivo, paving the way for studying cellular migration in biological systems.
Title: In vivo Positron Emission Particle Tracking (PEPT) of Single Cells Using an Expectation Maximization Algorithm. (IEEE Transactions on Medical Imaging)
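The core idea of PEPT-EM as described above is to treat each list-mode coincidence as a line of response (LOR) generated by one of several point sources, alternating a soft assignment of events to sources (E-step) with a position update for each source (M-step). The sketch below is a toy illustration under simplifying assumptions (a Gaussian event model with fixed width, no background or scatter term, user-supplied initial guesses), not the authors' implementation; `pept_em_sketch` and its parameters are hypothetical names.

```python
import numpy as np

def pept_em_sketch(points, dirs, init, sigma=1.0, n_iter=50):
    """Toy EM localization of multiple point sources from LORs.

    Each LOR i is parameterized by a point points[i] on the line and a
    unit direction dirs[i]. init holds rough initial source positions.
    """
    points = np.asarray(points, dtype=float)
    dirs = np.asarray(dirs, dtype=float)
    X = np.array(init, dtype=float)
    n_sources = len(X)
    # A_i = I - u_i u_i^T projects onto the plane normal to LOR i, so
    # (x - p_i)^T A_i (x - p_i) is the squared distance from x to line i.
    A = np.eye(3)[None] - dirs[:, :, None] * dirs[:, None, :]
    Ap = np.einsum('nij,nj->ni', A, points)
    w = np.full(n_sources, 1.0 / n_sources)
    for _ in range(n_iter):
        # E-step: responsibility of each source for each event.
        diff = X[None, :, :] - points[:, None, :]            # (n, K, 3)
        d2 = np.einsum('nki,nij,nkj->nk', diff, A, diff)     # squared distances
        r = w * np.exp(-0.5 * d2 / sigma**2)
        r /= r.sum(axis=1, keepdims=True) + 1e-300
        # M-step: each source moves to the weighted least-squares point
        # closest to its LORs: (sum_i r_ik A_i) x_k = sum_i r_ik A_i p_i.
        for k in range(n_sources):
            M = np.einsum('n,nij->ij', r[:, k], A)
            b = r[:, k] @ Ap
            X[k] = np.linalg.solve(M + 1e-9 * np.eye(3), b)
        w = r.mean(axis=0)
    return X, w
```

The M-step has a closed form because each LOR contributes a quadratic penalty; in practice the low activities discussed in the abstract mean few LORs per source, which is exactly the regime the published algorithm is designed to handle.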
Pub Date : 2025-12-03 DOI: 10.1109/tmi.2025.3639776
Haodong Zhong, Gaiying Li, Yi Wang, Jianqi Li
Quantitative susceptibility mapping (QSM) is a magnetic resonance imaging technique that quantifies tissue magnetic susceptibility by deconvolving the measured signal phase data. Accurate background field removal is essential for QSM, especially in surface regions of the brain, such as the cerebral cortex, where background field interference is substantial. Existing methods err in estimating the background field near organ boundaries, such as those of the brain, because of restrictive assumptions or loss of low-frequency information. A novel Green's function total field inversion (gTFI) method is proposed here that models the background field using integral equations built from Green's function and boundary conditions, thereby eliminating the need for traditional filtering, assumptions, or regularization. The gTFI method simultaneously determines the background field at the boundary and the tissue susceptibility from the measured phase data. Numerical simulations and in vivo experiments demonstrate that gTFI effectively separates the background field and reconstructs whole-brain QSM images without boundary erosion, offering superior performance over existing methods, particularly in cortical regions.
Title: Green's Function Total Field Inversion for Quantitative Susceptibility Mapping. (IEEE Transactions on Medical Imaging)
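For context on the deconvolution the abstract mentions: QSM inverts the standard dipole forward model, in which the measured relative field shift is the susceptibility map multiplied voxel-wise in k-space by the unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z). The snippet below implements only this textbook forward model, not the proposed gTFI inversion; the function names are illustrative.

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z) on the
    FFT frequency grid; the k = 0 term is conventionally set to zero."""
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid='ignore'):
        D = 1.0 / 3.0 - kz**2 / k2   # 0/0 at the origin, fixed below
    D[0, 0, 0] = 0.0
    return D

def susceptibility_to_field(chi):
    """Relative field shift induced by a susceptibility map chi:
    voxel-wise multiplication by the dipole kernel in k-space."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Because D vanishes on the cone where kz^2/|k|^2 = 1/3, inverting this model is ill-posed, which is why background field handling and regularization choices (the subject of gTFI) matter so much in practice.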
Pub Date : 2025-12-02 DOI: 10.1109/tmi.2025.3639308
Guang Han, Yaolong Hu, Ning Ding, Shaohua Liu, Linlin Hao, Sam Kwong
High-quality fundus images are critical for clinical diagnosis, yet real-world acquisition challenges often introduce multi-component degradations. Current deep learning methods typically address single degradations and lack a unified way of handling complex scenarios. In this paper, we propose the Multi-degradation Fundus Image Restoration Network (MFR-Net), an all-in-one restoration framework integrating frequency-aware prompt learning. MFR-Net comprehensively extracts the frequency-domain features of the different degradation components and injects them into the backbone network through dedicated prompt generation and interaction modules. Furthermore, to enhance the model's domain generalization capability, unsupervised domain adaptation is performed in a more reliable, perceptual- and image-quality-oriented space for domain alignment. Extensive experimental results demonstrate that the proposed method outperforms several state-of-the-art models in restoring degraded retinal images, especially for complex degradations in real images, where quantitative metrics improve by up to 5.42% over state-of-the-art algorithms.
Title: A Multi-degradation Fundus Image Restoration Network Guided by Frequency Prompt. (IEEE Transactions on Medical Imaging)
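As a minimal illustration of the frequency-domain feature separation the abstract refers to, an image can be split into low- and high-frequency components with a fixed circular mask in the Fourier domain. MFR-Net's prompt generation and interaction modules are learned; the fixed-mask split below is only a hand-crafted stand-in, and `frequency_split` is a hypothetical name.

```python
import numpy as np

def frequency_split(img, radius=0.1):
    """Split a 2-D image into low- and high-frequency components using a
    fixed circular mask in the centered Fourier spectrum. The two parts
    sum back to the original image exactly."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each spectral bin from the DC component.
    d = np.hypot((yy - h // 2) / h, (xx - w // 2) / w)
    low_mask = d <= radius
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)))
    return low, high
```

Intuitively, blur-like degradations mostly corrupt the high-frequency part while illumination artifacts live in the low-frequency part, which is why frequency-aware components are a natural handle for multi-component degradations.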
Pub Date : 2025-11-28 DOI: 10.1109/tmi.2025.3638630
Zhan Wu, Yang Yang, Yongjie Guo, Dayang Wang, Tianling Lyu, Yan Xi, Yang Chen, Hengyong Yu
Cone-Beam Computed Tomography (CBCT) provides real-time three-dimensional (3D) imaging support for intraoperative navigation. However, high-attenuation metal implants introduce severe metal artifacts in reconstructed CBCT images; these artifacts compromise image quality and may therefore affect diagnostic accuracy. Current CBCT metal artifact reduction (MAR) algorithms overlook the complementary information available across CBCT views, leading to inaccurate projection-domain interpolation and secondary artifacts in the reconstructed images. To tackle these challenges, we propose a novel Unsupervised Projection-domain Multiview Constraint Learning Network (UPMCL-Net), which learns directly from metal-affected data for CBCT MAR without ground-truth supervision. In addition, a transformer-based MultiView Consistency Module (MVCM) interpolates the projection-domain metal region for cross-view consistency. Finally, a Hybrid Feature Attention Module (HFAM) adaptively fuses inter-view and intra-view features. Comprehensive experiments on real clinical datasets confirm the performance of UPMCL-Net, showcasing its potential as an efficient, accurate, and reliable approach to CBCT MAR in clinical intraoperative interventions.
Title: UPMCL-Net: Unsupervised Projection-domain Multiview Constraint Learning for CBCT Metal Artifact Reduction. (IEEE Transactions on Medical Imaging)
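The projection-domain interpolation that the abstract says existing MAR methods get wrong can be illustrated with the classic linear-interpolation baseline: mask the metal trace in the sinogram and fill each view by interpolating across the unaffected detector bins. UPMCL-Net replaces this hand-crafted step with learned, cross-view-consistent inpainting; the sketch below is only the conventional per-view baseline, with hypothetical names.

```python
import numpy as np

def interpolate_metal_trace(sino, metal_mask):
    """Linear-interpolation MAR baseline. For each projection view (row
    of the sinogram), detector bins flagged in metal_mask are replaced
    by linear interpolation from the surrounding unaffected bins. Each
    view is treated independently, with no cross-view constraint."""
    out = sino.copy()
    n_views, n_bins = sino.shape
    bins = np.arange(n_bins)
    for v in range(n_views):
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            out[v, bad] = np.interp(bins[bad], bins[~bad], sino[v, ~bad])
    return out
```

Because each view is inpainted in isolation, the filled values from neighboring views need not describe a consistent 3D object, which is precisely the source of the secondary artifacts that motivate a multiview constraint.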
Pub Date : 2025-11-25 DOI: 10.1109/tmi.2025.3636868
Yuanhe Tian, Zexuan Yan, Nenan Lyu, Yan Song
Title: Extractive Radiology Reporting with Memory-based Cross-modal Representations. (IEEE Transactions on Medical Imaging)
Pub Date : 2025-11-25 DOI: 10.1109/tmi.2025.3636922
Xiangtao Wang, Ruizhi Wang, Thomas Lukasiewicz, Zhenghua Xu
Title: AMLP: Adjustable Masking Lesion Patches for Self-Supervised Medical Image Segmentation. (IEEE Transactions on Medical Imaging)