A multimodal framework for assessing the link between pathomics, transcriptomics, and pancreatic cancer mutations

Computerized Medical Imaging and Graphics (IF 4.9; CAS Tier 2, Medicine; JCR Q1, Engineering, Biomedical). Pub date: 2025-07-01; Epub: 2025-03-15. DOI: 10.1016/j.compmedimag.2025.102526
Francesco Berloco, Gian Maria Zaccaria, Nicola Altini, Simona Colucci, Vitoantonio Bevilacqua
Computerized Medical Imaging and Graphics, Volume 123, Article 102526. Available at: https://www.sciencedirect.com/science/article/pii/S0895611125000357

Abstract

In Pancreatic Ductal Adenocarcinoma (PDAC), predicting genetic mutations directly from histopathological images using Deep Learning can provide valuable insights. Combining several omics can yield further knowledge of the mechanisms underlying tumor biology. This study aimed to develop an explainable multimodal pipeline to predict genetic mutations in the KRAS, TP53, SMAD4, and CDKN2A genes, integrating pathomic features with transcriptomics from two independent datasets: TCGA-PAAD, used as the training set, and CPTAC-PDA, used as the external validation set. Large and small configurations of CLAM (Clustering-constrained Attention Multiple Instance Learning) models were evaluated with three different feature extractors (ResNet50, UNI, and CONCH). RNA-seq data were pre-processed both conventionally and with three autoencoder architectures. The processed transcript panels were fed into machine learning (ML) models for mutation classification. Attention maps and SHAP were employed to highlight significant features from both data modalities. A fusion layer or a voting mechanism combined the outputs of the pathomic and transcriptomic models to obtain a multimodal prediction. Performance was compared using the Area Under the Receiver Operating Characteristic (AUROC) and Precision-Recall (AUPRC) curves. On the validation set, multimodal ML achieved an AUROC of 0.92 and an AUPRC of 0.98 for KRAS. For TP53, the multimodal voting model achieved an AUROC of 0.75 and an AUPRC of 0.85. For SMAD4 and CDKN2A, transcriptomic ML models achieved AUROCs of 0.71 and 0.65, while multimodal ML showed AUPRCs of 0.39 and 0.37, respectively. This approach demonstrated the potential of combining pathomics with transcriptomics, offering an interpretable framework for predicting key genetic mutations in PDAC.
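The late-fusion step described in the abstract (a voting mechanism combining pathomic and transcriptomic model outputs, evaluated by AUROC and AUPRC) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the soft-voting weight, the `soft_vote` helper, and all probability values are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def soft_vote(p_pathomics, p_transcriptomics, w=0.5):
    """Weighted average of per-sample mutation probabilities from the two
    modalities; w is an illustrative fusion weight, not from the paper."""
    return w * np.asarray(p_pathomics) + (1 - w) * np.asarray(p_transcriptomics)

# Toy example: 6 samples with binary mutation labels (e.g. KRAS mutated = 1).
y_true = np.array([1, 1, 0, 1, 0, 1])
p_path = np.array([0.8, 0.6, 0.3, 0.7, 0.4, 0.9])  # e.g. slide-level MIL output
p_rna  = np.array([0.9, 0.7, 0.2, 0.6, 0.5, 0.8])  # e.g. ML model on transcript panel

p_multi = soft_vote(p_path, p_rna)
auroc = roc_auc_score(y_true, p_multi)
auprc = average_precision_score(y_true, p_multi)
print(f"AUROC={auroc:.2f}, AUPRC={auprc:.2f}")
```

AUPRC is reported alongside AUROC here because, as in the paper's SMAD4 and CDKN2A results, the two metrics can diverge sharply under class imbalance.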
Journal metrics: CiteScore 10.70; self-citation rate 3.50%; articles per year 71; review time 26 days.
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.