Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment

Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong-Yu Zhou, Ritse Mann

IEEE Journal of Biomedical and Health Informatics · Published 2025-02-11 · DOI: 10.1109/JBHI.2025.3540574
Citations: 0
Abstract
Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating the downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results on internal and external datasets demonstrate that our approach not only enhances label efficiency across zero-, few-, and full-shot regimes but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas across temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating temporal clinical decision-making.
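The abstract's single-time-scale vision-text alignment (VTA) step pairs each scan with its radiology report. The paper does not publish its exact objective; the sketch below illustrates the general idea with a CLIP-style symmetric InfoNCE loss over image and report embeddings. All names (`vta_loss`, the embedding shapes, the temperature value) are hypothetical, not taken from the paper.

```python
# Hedged sketch of contrastive vision-text alignment: matched image/report
# pairs (same row index) are pulled together, mismatched pairs pushed apart.
import numpy as np


def vta_loss(img_emb: np.ndarray, txt_emb: np.ndarray,
             temperature: float = 0.07) -> float:
    """Symmetric InfoNCE over an (N, D) image batch and its (N, D) reports."""
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(img))                # diagonal = positive pairs

    def cross_entropy(l: np.ndarray, y: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return float(-log_probs[np.arange(len(y)), y].mean())

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))


rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 32))
aligned = vta_loss(emb, emb)                    # identical pairs: low loss
mismatched = vta_loss(emb, rng.normal(size=(4, 32)))
```

In a full pipeline these embeddings would come from trainable image and text encoders; the multi-time-scale TVP/TTP objectives would then apply analogous contrasts across examination time points rather than across modalities.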
Journal Introduction:
IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.