Yujia Xia, Jie Zhou, Xiaolei Xun, Jin Zhang, Ting Wei, Ruitian Gao, Bobby Reddy, Chao Liu, Geoffrey Kim, Zhangsheng Yu
Insights into Imaging, vol. 15, no. 1, p. 214. Published 2024-08-26. DOI: 10.1186/s13244-024-01784-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11347550/pdf/
CT-based multimodal deep learning for non-invasive overall survival prediction in advanced hepatocellular carcinoma patients treated with immunotherapy.
Objectives: To develop a deep learning model combining CT scans and clinical information to predict overall survival in advanced hepatocellular carcinoma (HCC).
Methods: This retrospective study included advanced HCC patients treated with immunotherapy at 52 multi-national in-house centers between 2018 and 2022. A multi-modal prognostic model using the baseline and first follow-up CT images together with 7 clinical variables was proposed. A convolutional-recurrent neural network (CRNN) was developed to extract spatial-temporal information from automatically selected representative 2D CT slices and produce a radiological score, which was then fused with a Cox-based clinical score to yield the survival risk. The model's effectiveness was assessed using the time-dependent area under the receiver operating characteristic curve (AUC), and risk-group stratification was evaluated using the log-rank test. The prognostic performance of the multi-modal model was compared with that of models missing a modality and with the size-based RECIST criteria.
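As a purely illustrative sketch (not the authors' code), the late-fusion step described above, combining a CRNN-derived radiological score with a Cox-based clinical score into a single survival risk, might look like the following; the function name, scores, and weighting are assumptions:

```python
# Hypothetical sketch of the score-fusion step described in the Methods.
# The weighted-sum form and the weight value are illustrative assumptions,
# not the authors' implementation.

def fuse_risk(radiological_score: float, clinical_score: float, w: float = 0.5) -> float:
    """Combine a CRNN radiological score with a Cox-based clinical score
    into a single survival risk via a weighted sum (one common fusion choice)."""
    return w * radiological_score + (1.0 - w) * clinical_score

# Example: equal weighting of the two modality scores
risk = fuse_risk(0.8, 0.4)  # approximately 0.6, the midpoint of the two scores
```

In practice the fusion weight would itself be learned or tuned on the training set rather than fixed.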
Results: Two hundred seven patients (mean age, 61 years ± 12 [SD]; 180 men) were included. The multi-modal CRNN model reached AUCs of 0.777 and 0.704 for 1-year overall survival prediction in the validation and test sets, respectively. Based on the median risk score of the training set, the model achieved significant risk stratification in the validation (hazard ratio [HR] = 3.330, p = 0.008) and test (HR = 2.024, p = 0.047) sets. Models with missing modalities (the single-modal imaging-based model and the model incorporating only baseline scans) still achieved favorable risk stratification performance (all p < 0.05, except for one, p = 0.053). Moreover, the results demonstrated the superiority of the deep learning-based model over the RECIST criteria.
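The median-based stratification reported above can be sketched as follows. This is a minimal illustration of the thresholding logic only, with made-up scores; the study's actual pipeline then compares the two groups' survival curves with a log-rank test:

```python
# Minimal sketch of median-threshold risk stratification, as described above.
# The patient scores and helper names are illustrative assumptions.

def median(values):
    """Plain median, with no external dependencies."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

def stratify(train_scores, eval_scores):
    """Split evaluation-set patients into high- and low-risk groups using the
    median risk score of the training set as the cutoff (as in the study)."""
    cutoff = median(train_scores)
    high = [s for s in eval_scores if s > cutoff]
    low = [s for s in eval_scores if s <= cutoff]
    return high, low

high, low = stratify([0.2, 0.5, 0.9], [0.1, 0.6, 0.7])
# high -> [0.6, 0.7], low -> [0.1]
```

Fixing the cutoff on the training set, rather than re-computing it per cohort, is what makes the validation- and test-set hazard ratios honest out-of-sample measures.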
Conclusion: Deep learning analysis of CT scans and clinical data can offer significant prognostic insights for patients with advanced HCC.
Critical relevance statement: The established model can help monitor patients' disease statuses and identify those with poor prognosis at the time of first follow-up, helping clinicians make informed treatment decisions, as well as early and timely interventions.
Key points: An AI-based prognostic model was developed for advanced HCC using a multi-national patient cohort. The model extracts spatial-temporal information from CT scans and integrates it with clinical variables for prognostication. The model demonstrated superior prognostic ability compared to the conventional size-based RECIST method.
Journal description:
Insights into Imaging (I³) is a peer-reviewed open access journal published under the brand SpringerOpen. All content published in the journal is freely available online to anyone, anywhere!
I³ continuously updates scientific knowledge and progress in best-practice standards in radiology through the publication of original articles and state-of-the-art reviews and opinions, along with recommendations and statements from the leading radiological societies in Europe.
Founded by the European Society of Radiology (ESR), I³ creates a platform for educational material, guidelines and recommendations, and a forum for topics of controversy.
A balanced combination of review articles, original papers, short communications from European radiological congresses and information on society matters makes I³ an indispensable source for current information in this field.
I³ is owned by the ESR; however, authors retain copyright to their articles under the Creative Commons Attribution License (see Copyright and License Agreement). All articles can be read, redistributed, and reused for free, as long as the author of the original work is properly cited.
The open access fees (article-processing charges) for this journal are kindly sponsored by ESR for all Members.
The journal went open access in 2012, which means that all articles published since then are freely available online.