
Latest Publications in IEEE Transactions on Medical Imaging

High Volume Rate 3D Ultrasound Reconstruction with Diffusion Models
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-18 DOI: 10.1109/tmi.2025.3645849
Tristan S.W. Stevens, Oisín Nolan, Oudom Somphone, Jean-Luc Robert, Ruud J.G. Van Sloun
Citations: 0
A General Framework for Efficient Medical Image Analysis via Shared Attention Vision Transformer
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-17 DOI: 10.1109/tmi.2025.3644949
Yihang Liu, Ying Wen, Longzhen Yang, Lianghua He, Mengchu Zhou
Citations: 0
MedicoSAM: Robust Improvement of SAM for Medical Imaging
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-17 DOI: 10.1109/tmi.2025.3644811
Anwai Archit, Luca Freckmann, Constantin Pape
Citations: 0
Multifocal Optical-resolution Photoacoustic Microscopy with a Masked Single-element Transducer
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-12 DOI: 10.1109/tmi.2025.3643618
Xiaofei Luo, Rui Cao, Peng Hu, Yilin Luo, Yushun Zeng, Yide Zhang, Manxiu Cui, Qifa Zhou, Geng Ku, Lihong V. Wang
Citations: 0
Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-12 DOI: 10.1109/tmi.2025.3643631
Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen
Citations: 0
Constructing Effective Hyper-Connectivity Networks through Adaptive Directed Hypergraph Embedded Dictionary Learning: Application to Early Mild Cognitive Impairment Detection.
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-11 DOI: 10.1109/tmi.2025.3642294
Lan Yang,Yao Li,Chen Qiao
The accurate diagnosis of early mild cognitive impairment is crucial for timely intervention and treatment of dementia, but the condition is challenging to distinguish from normal aging due to its complex pathology and mild symptoms. Effective hyper-connectivity identified through directed hypergraphs has recently emerged as a promising analysis approach for early detection of mild cognitive impairment and exploration of its underlying neural mechanisms, because it captures directional higher-order interactions across multiple brain regions. However, current methods face limitations, including inefficiency in high-dimensional spaces, sensitivity to noise, reliance on manually defined structures, lack of global structural information, and static learning mechanisms. To address these issues, we integrate robust dictionary learning with directed hypergraph structure learning within a unified framework. This approach jointly estimates low-dimensional sparse representations and the directed hypergraph, allowing the two processes to reinforce each other dynamically: refining the directed hypergraph improves the estimation of the low-dimensional sparse representations, which in turn enhances the quality of the hypergraph estimate. Experimental analyses on simulated data confirm the positive interplay between these processes, demonstrating the effectiveness of the proposed collaborative learning strategy. Furthermore, results on real-world brain signal data show that the proposed method is highly competitive in early detection of mild cognitive impairment, highlighting its ability to identify effective hyper-connectivity networks with significant differences.
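The abstract's core idea, jointly estimating sparse representations and a connectivity structure so that each refines the other, can be illustrated with a generic alternating scheme. The dictionary `D`, the ISTA-style sparse-coding step, and the similarity-based graph update below are illustrative stand-ins, not the paper's directed-hypergraph algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty used in sparse coding.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def joint_sparse_graph(X, D, lam=0.1, n_iter=20):
    """Toy alternating scheme: sparse codes S for signals X under a
    fixed dictionary D, and a similarity graph W rebuilt from the codes
    each round. A sketch of joint estimation only."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # ISTA step size (1 / sigma_max^2)
    S = np.zeros((D.shape[1], X.shape[1]))
    W = np.zeros((X.shape[1], X.shape[1]))
    for _ in range(n_iter):
        # (1) Sparse coding: one ISTA step on ||X - D S||^2 + lam ||S||_1.
        S = soft_threshold(S - step * D.T @ (D @ S - X), step * lam)
        # (2) Structure update: pairwise similarity of the current codes.
        W = S.T @ S
        np.fill_diagonal(W, 0.0)                 # no self-connections
    return S, W
```

In the paper's framework the structure is a directed hypergraph rather than a symmetric similarity graph, but the back-and-forth between representation and structure follows the same alternating pattern.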
Citations: 0
BONBID-HIE 2023: Lesion Segmentation Challenge in BOston Neonatal Brain Injury Data for Hypoxic Ischemic Encephalopathy.
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-11 DOI: 10.1109/tmi.2025.3638977
Rina Bao,Anna N Foster,Ya'nan Song,Rutvi Vyas,Ankush Kesri,Imad Eddine Toubal,Elham Soltani Kazemi,Gani Rahmon,Taci Kucukpinar,Mohamed Almansour,Mai-Lan Ho,K Palaniappan,Dean Ninalga,Chiranjeewee Prasad Koirala,Sovesh Mohapatra,Gottfried Schlaug,Marek Wodzinski,Henning Muller,David G Ellis,Michele R Aizenberg,M Arda Aydin,Elvin Abdinli,Gozde Unal,Nazanin Tahmasebi,Kumaradevan Punithakumar,Tian Song,Yun Peng,Sara V Bates,Randy Hirschtick,P Ellen Grant,Yangming Ou
Hypoxic Ischemic Encephalopathy (HIE) is a brain dysfunction affecting approximately 1 to 5 per 1,000 full-term neonates. The precise delineation and segmentation of HIE-related lesions in neonatal brain Magnetic Resonance Images (MRI) is pivotal for advancing outcome prediction, identifying patients at high risk, elucidating neurological manifestations, and assessing treatment efficacy. Despite its importance, the development of algorithms for segmenting HIE lesions from MRI volumes has been impeded by data scarcity. Addressing this critical gap, we organized the first BONBID-HIE challenge, held in conjunction with MICCAI 2023, providing diffusion MRI data (Apparent Diffusion Coefficient (ADC) maps) for HIE lesion segmentation. In total, 14 algorithms were submitted, employing a gamut of cutting-edge machine-learning-based automatic segmentation methods. Our comprehensive analysis of HIE lesion segmentation and the submitted algorithms provides an in-depth evaluation of the current state of the art, outlines directions for future advancements, and highlights persistent hurdles. To foster ongoing research and benchmarking, the annotated HIE dataset, developed algorithm dockers, and unified evaluation codes are accessible through a dedicated online platform (https://bonbid-hie2023.grand-challenge.org).
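Lesion-segmentation challenges of this kind are commonly scored with overlap metrics such as the Dice coefficient; the challenge's actual evaluation code lives on the linked platform, but a minimal version of the metric looks like this:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice overlap between two binary lesion masks: 2|A∩B| / (|A|+|B|).
    A standard segmentation metric, shown here for illustration; the
    BONBID-HIE evaluation may combine it with other measures."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```

Identical masks score (approximately) 1.0 and disjoint masks score 0.0, which makes the metric easy to sanity-check against hand-built examples.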
Citations: 0
Polar Subarea-Aware Fusion Net for Posterior Eyeball Shape Reconstruction.
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-11 DOI: 10.1109/tmi.2025.3642381
Jiaqi Zhang,Xiuzhe Wu,Jiahui Liu,Chunyu Zou,Fengze Nie,Zicheng Sun,Xiaojuan Qi,Jiang Liu
High-fidelity reconstruction of the Posterior Eyeball Shape (PES) is crucial for early diagnosis and timely intervention of sight-threatening diseases such as high myopia, diabetic retinopathy, and glaucoma. However, existing magnetic resonance imaging (MRI)- and optical coherence tomography (OCT)-based methods either provide only coarse scleral geometry or suffer from suboptimal PES representations due to limited field of view (FOV) and detail loss, hindering accurate assessment of intact retinal pigment epithelium (RPE) abnormalities. In this study, we propose the Polar Subarea-Aware Fusion Net (PSAFNet), a novel end-to-end framework that reconstructs complete and high-fidelity PES directly from a single local OCT scan, even under clinically common settings with only 6.25% FOV. To avoid information loss, we reformulate PES reconstruction as a 2D dense regression task and introduce the Ocular Shape Map (OSM), an innovative lossless 2D representation that encodes 3D coordinate attributes into corresponding image channels. PSAFNet then leverages three dedicated modules, the Subarea Feature Embedding Module (SFEM), Channel- and Patch-wise Fusion Blocks (CFB/PFB), and the Reassemble and Up-sample Module (RUM), to enhance positional awareness, integrate local and global features, and achieve high-resolution OSM prediction. Furthermore, we construct two large-scale datasets, POSDiag and PESGen, comprising 794 ultra-widefield OCT scans from diverse health conditions and imaging devices, providing a comprehensive benchmark for PES reconstruction. Extensive experiments demonstrate that PSAFNet consistently outperforms existing methods (e.g., EMD=5.58, AAL=97.3%) and exhibits strong clinical relevance, validated by superior performance in downstream disease classification and ophthalmologist evaluations (Expert-Score=82.78%). The source code of the proposed PSAFNet is released at https://github.com/HKUZJ77/PSAFNet.
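The OSM idea, storing 3D coordinate attributes as image channels so that shape recovery becomes a 2D dense regression problem, can be sketched as follows. The function names and the per-pixel L2 loss are assumptions for illustration, not PSAFNet's exact formulation.

```python
import numpy as np

def osm_from_points(points):
    """Encode an (H, W) grid of 3D surface points as a 3-channel map:
    one coordinate per channel, lossless by construction. A sketch of
    the Ocular Shape Map concept, not the paper's exact construction."""
    pts = np.asarray(points, dtype=np.float32)
    assert pts.ndim == 3 and pts.shape[-1] == 3, "expected (H, W, 3) grid"
    return pts  # channels are x, y, z

def dense_regression_loss(pred_osm, gt_osm):
    # Per-pixel squared error: 3D shape recovery cast as 2D regression.
    return float(np.mean((np.asarray(pred_osm) - np.asarray(gt_osm)) ** 2))
```

Because every pixel carries its full (x, y, z) position, decoding the surface back from a predicted OSM is just reading the channels off, which is what makes the representation lossless.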
Citations: 0
Block-Champagne: A Novel Bayesian Framework for Imaging Extended E/MEG Source
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-10 DOI: 10.1109/tmi.2025.3642620
Zhao Feng, Cuntai Guan, Yu Sun
Citations: 0
Facilitate Robust Early Screening of Cerebral Palsy via General Movements Assessment with Multi-Modality Co-Learning.
IF 10.6 CAS Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-12-09 DOI: 10.1109/tmi.2025.3641894
Wang Yin,Chunling Huang,Linxi Chen,Xinrui Huang,Zhaohong Wang,Yang Bian,Yuan Zhou,You Wan,Tongyan Han,Ming Yi
General movement assessment (GMA) is a non-invasive method used to evaluate neuromotor behavior in infants under six months of age and is considered a reliable tool for the early detection of cerebral palsy (CP). However, traditional GMA relies on the subjective judgment of multiple internationally certified physicians, making it time-consuming and limiting its accessibility for widespread use. Artificial intelligence (AI) approaches may overcome these limitations, but they are usually based on motion skeletons and lack the ability to capture detailed body information. Here, we propose CoGMA (Collaborative General Movements Assessment), a novel multi-modality co-learning framework for GMA. By integrating a multimodal large language model as an auxiliary network during training, CoGMA incorporates four types of input data (skeleton data, clinical information, RGB video, and text descriptions) to enhance representation learning. During inference, however, CoGMA achieves efficient and accurate prediction using only skeleton data and clinical information. Experimental evaluations indicate that CoGMA performs robustly across both the writhing and fidgety movement stages, while also excelling in zero-shot evaluation of fidgety movements, thereby mitigating the issue of limited training samples in that stage. This framework significantly enhances the GMA methodology and lays the groundwork for future advancements in early detection and research on infant neuromotor behavior. Additionally, to facilitate anonymized data sharing, we introduce InfantAnimator, a tool that generates non-identifiable videos while preserving essential motion features, thereby supporting broader research and collaboration. The code is available at GitHub: https://github.com/wwYinYin/CoGMA.
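The train-with-more, infer-with-less pattern described in the abstract can be sketched as a fusion function whose auxiliary inputs (video, text) are optional, so the same interface runs on skeleton plus clinical data alone at inference time. The names and the simple averaging fusion are illustrative assumptions, not CoGMA's architecture.

```python
import numpy as np

def fuse(features):
    # Average the available modality embeddings into one representation.
    return np.mean(np.stack(features), axis=0)

def forward(skeleton, clinical, video=None, text=None):
    """Toy multi-modality co-learning interface: video and text act as
    auxiliary, training-only modalities, so inference needs only the
    skeleton and clinical embeddings. Fusion scheme is a placeholder."""
    feats = [skeleton, clinical]
    if video is not None:
        feats.append(video)   # auxiliary modality, used during training
    if text is not None:
        feats.append(text)    # auxiliary modality, used during training
    return fuse(feats)
```

Calling `forward(sk, cl, video=vid, text=txt)` during training and `forward(sk, cl)` at inference keeps the output shape identical, which is the property that lets the auxiliary modalities be dropped without retraining the inference path.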
Citations: 0