HiCur-NPC: Hierarchical Feature Fusion Curriculum Learning for Multi-Modal Foundation Model in Nasopharyngeal Carcinoma

Zipei Wang, Mengjie Fang, Linglong Tang, Jie Tian, Di Dong
Published in: IEEE Transactions on Medical Imaging, vol. 44, no. 10, pp. 3997-4009
Publication date: 2025-04-08
DOI: 10.1109/TMI.2025.3558775
URL: https://ieeexplore.ieee.org/document/10959026/

Abstract

Providing precise and comprehensive diagnostic information to clinicians is crucial for improving the treatment and prognosis of nasopharyngeal carcinoma. Multi-modal foundation models, which can integrate data from various sources, have the potential to significantly enhance clinical assistance. However, several challenges remain: (1) the lack of large-scale visual-language datasets for nasopharyngeal carcinoma; (2) the inability of existing pre-training and fine-tuning methods to capture the hierarchical features required for complex clinical tasks; and (3) the limited visual perception of current foundation models, caused by inadequate integration of multi-modal information. While curriculum learning can improve a model's ability to handle multiple tasks through systematic knowledge accumulation, it does not account for hierarchical features and their dependencies, which limits the knowledge gained. To address these issues, we propose the Hierarchical Feature Fusion Curriculum Learning method, which consists of three stages: visual knowledge learning, coarse-grained alignment, and fine-grained fusion. First, we introduce the Hybrid Contrastive Masked Autoencoder to pre-train visual encoders on 755K multi-modal nasopharyngeal carcinoma images (CT, MRI, and endoscopy), fully extracting deep visual information. Then, we construct a 65K visual instruction fine-tuning dataset from open-source data and clinician diagnostic reports, achieving coarse-grained alignment with visual information in a large language model. Finally, we design a Mixture of Experts Cross Attention structure for deep fine-grained fusion of global multi-modal information. Our model outperforms previously developed specialized models on all key clinical tasks for nasopharyngeal carcinoma, including diagnosis, report generation, tumor segmentation, and prognosis.
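The abstract's "Mixture of Experts Cross Attention" is not specified in detail here; a minimal illustrative sketch of the general idea — text tokens attending over visual tokens through several expert attention heads, mixed by a learned gate — might look as follows. All names, shapes, and the single-head/NumPy simplification are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text, visual, Wq, Wk, Wv):
    # Queries come from text tokens, keys/values from visual tokens.
    Q, K, V = text @ Wq, visual @ Wk, visual @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores, axis=-1) @ V       # (Tq, d)

def moe_cross_attention(text, visual, experts, Wg):
    # Gate each text token over the experts, then mix expert outputs.
    gates = softmax(text @ Wg, axis=-1)        # (Tq, n_experts)
    outs = np.stack([cross_attention(text, visual, *e) for e in experts])  # (E, Tq, d)
    return np.einsum("te,etd->td", gates, outs)

# Toy dimensions: 4 text tokens, 6 visual tokens, model dim 16, 3 experts.
rng = np.random.default_rng(0)
d, Tq, Tk, E = 16, 4, 6, 3
experts = [tuple(rng.standard_normal((d, d)) * 0.1 for _ in range(3)) for _ in range(E)]
Wg = rng.standard_normal((d, E)) * 0.1
text = rng.standard_normal((Tq, d))
visual = rng.standard_normal((Tk, d))

fused = moe_cross_attention(text, visual, experts, Wg)
print(fused.shape)  # (4, 16): one fused multi-modal vector per text token
```

In a real system the experts would be full multi-head attention modules and the gate would be trained end-to-end; the sketch only shows how gated expert mixing combines cross-modal attention outputs into a single fused representation per token.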