
Artificial Intelligence in Medicine: Latest Publications

Privacy-preserving federated transfer learning for enhanced liver lesion segmentation in PET–CT imaging
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-28 · DOI: 10.1016/j.artmed.2025.103245
Rajesh Kumar , Shaoning Zeng , Jay Kumar , Zakria , Xinfeng Mao
Positron Emission Tomography–Computed Tomography (PET–CT) imaging is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede the deployment of accurate deep learning models. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning’s privacy-preserving collaboration with transfer learning’s pre-trained model adaptation, enhancing liver lesion segmentation in PET–CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, our approach ensures robust segmentation without sharing sensitive patient data. Our contributions are: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET–CT feature integration; (3) extensive evaluation across diverse institutions using privacy-enhancing technologies such as Differential Privacy (DP) and Homomorphic Encryption (HE). Experiments on simulated multi-institutional PET–CT datasets demonstrate superior performance over baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.
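The privacy side of such a framework can be sketched in a few lines. This is a minimal, illustrative take on DP-style aggregation in federated averaging, not the authors' implementation: `clip_norm` and `noise_std` are made-up hyperparameters, and a real system would calibrate the noise to a formal (epsilon, delta) budget.

```python
import math
import random

def clip_update(update, clip_norm):
    """Clip a client's update vector to a maximum L2 norm (standard DP-SGD step)."""
    norm = math.sqrt(sum(u * u for u in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_federated_average(client_updates, clip_norm=1.0, noise_std=0.1, rng=None):
    """Average clipped client updates and add Gaussian noise to the aggregate,
    so no single client's contribution can be read off the shared result."""
    rng = rng or random.Random(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    dim = len(client_updates[0])
    n = len(client_updates)
    avg = [sum(c[i] for c in clipped) / n for i in range(dim)]
    return [a + rng.gauss(0.0, noise_std / n) for a in avg]

# Three hypothetical institutions contribute gradient-like updates.
updates = [[0.5, -0.2, 3.0], [0.1, 0.4, -0.3], [2.0, 2.0, 2.0]]
noisy_avg = dp_federated_average(updates, clip_norm=1.0, noise_std=0.05)
```

Clipping bounds each institution's influence before noise is added, which is what makes the Gaussian mechanism meaningful here.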
Citations: 0
Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-26 · DOI: 10.1016/j.artmed.2025.103251
Miriam Cobo , David Corral Fontecha , Wilson Silva , Lara Lloret Iglesias
Artificial intelligence in medical imaging has grown rapidly in the past decade, driven by advances in deep learning and widespread access to computing resources. Applications cover diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality’s features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.
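As a concrete instance of the physics the survey refers to, X-ray contrast arises from the Beer–Lambert law: intensity decays exponentially with the attenuation accumulated along each ray. A toy calculation (the coefficients are rough illustrative values for intuition, not calibrated constants):

```python
import math

def transmitted_intensity(i0, segments):
    """Beer-Lambert law: I = I0 * exp(-sum(mu_i * d_i)) for a ray crossing tissue
    segments, each with linear attenuation coefficient mu (1/cm) and length d (cm)."""
    return i0 * math.exp(-sum(mu * d for mu, d in segments))

# Illustrative ray path: soft tissue, bone, soft tissue (approximate mu values).
ray = [(0.20, 3.0), (0.50, 1.0), (0.20, 2.0)]
i_out = transmitted_intensity(1.0, ray)  # exp(-1.5), so most photons are absorbed
```

Bone's higher attenuation coefficient is exactly why it appears bright on a radiograph; a model aware of this exponential relationship has a physical prior that pixel intensities alone do not carry.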
Citations: 0
TIPs: Tooth instance and pulp segmentation based on hierarchical extraction and fusion of anatomical priors from cone-beam CT
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-23 · DOI: 10.1016/j.artmed.2025.103247
Tao Zhong , Yang Ning , Xueyang Wu , Li Ye , Chichi Li , Yu Zhang , Yu Du
Accurate instance segmentation of tooth and pulp from cone-beam computed tomography (CBCT) images is essential but highly challenging due to the pulp’s small structures and indistinct boundaries. To address these critical challenges, we propose TIPs, a framework for Tooth Instance and Pulp segmentation. TIPs initially employs a backbone model to segment a binary mask of the tooth from CBCT images, which is then utilized to derive a position prior for the tooth and a shape prior for the pulp. Subsequently, we propose Hierarchical Fusion Mamba models to leverage the strengths of both anatomical priors and CBCT images by extracting and integrating shallow and deep features from Convolutional Neural Networks (CNNs) and State Space Sequence Models (SSMs), respectively. This process achieves tooth instance and pulp segmentation, which are then combined to obtain the final pulp instance segmentation. Extensive experiments on CBCT scans from 147 patients demonstrate that TIPs significantly outperforms state-of-the-art methods in terms of segmentation accuracy. Furthermore, we have encapsulated this framework into an openly accessible tool for one-click use. To our knowledge, this is the first toolbox capable of segmenting tooth and pulp instances, with its performance validated on two external datasets comprising 59 samples from the Toothfairy2 dataset and 48 samples from the STS dataset. These results demonstrate the potential of TIPs as a practical tool to boost clinical workflows in digital dentistry, enhancing the precision and efficiency of dental diagnostics and treatment planning.
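The first stage's binary mask can seed simple anatomical priors for the second stage. A toy sketch of deriving a bounding-box position prior from a 2D binary mask (illustrative only; the paper's priors are richer than a bounding box, and CBCT volumes are 3D):

```python
def position_prior(mask):
    """Derive a bounding-box position prior (top, left, bottom, right) from a
    binary mask given as a list of rows of 0/1 values; None if the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
bbox = position_prior(mask)  # (1, 1, 2, 2)
```

A downstream pulp model can use such a prior to restrict its search region, which matters precisely because the pulp is small relative to the full field of view.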
Citations: 0
Multiplex aggregation combining sample reweight composite network for pathology image segmentation
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-22 · DOI: 10.1016/j.artmed.2025.103239
Dawei Fan , Zhuo Chen , Yifan Gao , Jiaming Yu , Kaibin Li , Yi Wei , Yanping Chen , Riqing Chen , Lifang Wei
In digital pathology, nuclei segmentation is a critical task for pathological image analysis, holding significant importance for diagnosis and research. However, challenges such as blurred boundaries between nuclei and background regions, domain shifts between pathological images, and uneven distribution of nuclei pose significant obstacles to segmentation tasks. To address these issues, we propose an innovative Causal inference inspired Diversified aggregation convolution Network named CDNet, which integrates a Diversified Aggregation Convolution (DAC), a Causal Inference Module (CIM) based on causal discovery principles, and a comprehensive loss function. DAC mitigates the issue of unclear boundaries between nuclei and background regions, and CIM enhances the model’s cross-domain generalization ability. A novel Stable-Weighted Combined loss function combines the chunk-computed Dice Loss with the Focal Loss and the Causal Inference Loss to address the issue of uneven nuclei distribution. Experimental evaluations on the MoNuSeg, GLySAC, and MoNuSAC datasets demonstrate that CDNet significantly outperforms other models and exhibits strong generalization capabilities. Specifically, CDNet outperforms the second-best model by 0.79% (mIoU) and 1.32% (DSC) on the MoNuSeg dataset, by 2.65% (mIoU) and 2.13% (DSC) on the GLySAC dataset, and by 1.54% (mIoU) and 1.10% (DSC) on the MoNuSAC dataset. Code is publicly available at https://github.com/7FFDW/CDNet.
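Two ingredients of such a combined loss can be sketched directly. This is a generic Dice-plus-Focal weighting over flattened pixel probabilities, not the paper's Stable-Weighted Combined loss (which uses a chunk-computed Dice and adds a Causal Inference Loss); the weights `w_dice` and `w_focal` are illustrative.

```python
import math

def soft_dice_loss(probs, targets, eps=1e-6):
    """1 - Dice overlap between predicted probabilities and binary targets."""
    inter = sum(p * t for p, t in zip(probs, targets))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)

def focal_loss(probs, targets, gamma=2.0):
    """Focal loss down-weights easy pixels, countering foreground/background imbalance."""
    total = 0.0
    for p, t in zip(probs, targets):
        pt = p if t == 1 else 1.0 - p           # probability assigned to the true class
        total += -((1.0 - pt) ** gamma) * math.log(max(pt, 1e-12))
    return total / len(probs)

def combined_loss(probs, targets, w_dice=0.5, w_focal=0.5):
    return w_dice * soft_dice_loss(probs, targets) + w_focal * focal_loss(probs, targets)

loss = combined_loss([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```

Dice directly optimizes region overlap while Focal keeps rare nuclei pixels from being swamped by background, which is why the two are commonly paired for uneven class distributions.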
Citations: 0
Unprepared and overwhelmed: A case for clinician-focused AI education
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-22 · DOI: 10.1016/j.artmed.2025.103252
Nadia Siddiqui , Yazan Bouchi , Ellen Kim , Jonathan D. Hron , John Park , John Kang
This perspective illustrates the need for improved AI education for clinicians, highlighting gaps in current approaches and technical content. It advocates for the creation of AI guides specifically designed for clinicians, integrating case-based learning approaches and led by clinical informaticians. We emphasize the importance of modern medical educational strategies and reflect on the relevance and applicability of AI education, to ensure clinicians are prepared for safe, effective, and efficient AI-driven healthcare.

1–2 Sentence description

This position article reflects on the current landscape of AI educational guides for clinicians, identifying gaps in instructional approaches and technical content. We propose the development of case-based AI education modules led by clinical informatics physicians in collaboration with professional societies.
Citations: 0
EvidenceMap: Learning evidence analysis to unleash the power of small language models for biomedical question answering
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-19 · DOI: 10.1016/j.artmed.2025.103246
Chang Zong , Jian Wan , Siliang Tang , Lei Zhang
When addressing professional questions in the biomedical domain, humans typically acquire multiple pieces of information as evidence and engage in multifaceted analysis to provide high-quality answers. Current LLM-based question answering methods lack a detailed definition and learning process for evidence analysis, leading to the risk of error propagation and hallucinations when using evidence. Although increasing the parameter size of LLMs can alleviate these issues, it also presents challenges for training and deployment with limited resources. In this study, we propose EvidenceMap, which aims to enable a lightweight pre-trained language model to explicitly learn multiple aspects of biomedical evidence, including supportive evaluation, logical correlation, and content summarization, thereby latently guiding a generative model (around 3B parameters) to provide textual responses. Experimental results demonstrate that our method, learning evidence analysis by fine-tuning a model with only 66M parameters, exceeds the RAG method with an 8B LLM by 19.9% and 5.7% in reference-based quality and accuracy, respectively. The code and dataset for reproducing our framework and experiments are available at https://github.com/ZUST-BIT/EvidenceMap.
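The evidence-analysis idea can be caricatured in a few lines, with keyword overlap standing in for the trained small LM. Everything here (the scoring rule, the prompt layout, the example question and evidence) is a hypothetical stand-in for the paper's learned components, shown only to make the pipeline shape concrete.

```python
def support_score(evidence, question):
    """Toy supportive-evaluation score: fraction of question terms that appear in
    the evidence. (EvidenceMap trains a small LM for this; overlap is a stand-in.)"""
    q_terms = set(question.lower().split())
    e_terms = set(evidence.lower().split())
    return len(q_terms & e_terms) / max(len(q_terms), 1)

def build_evidence_map(question, evidences, top_k=2):
    """Rank evidence by support and lay it out as structured context
    that a downstream generative model can condition on."""
    ranked = sorted(evidences, key=lambda e: support_score(e, question), reverse=True)
    lines = [f"[evidence {i + 1} | support={support_score(e, question):.2f}] {e}"
             for i, e in enumerate(ranked[:top_k])]
    return "\n".join(lines) + f"\nQuestion: {question}"

q = "does aspirin reduce stroke risk"
ev = ["aspirin showed reduced stroke risk in trial A",
      "vitamin D levels were unchanged"]
prompt = build_evidence_map(q, ev)
```

The point of the structured layout is that the generator receives an explicit analysis of each evidence piece rather than a raw retrieval dump, which is where RAG pipelines tend to propagate errors.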
Citations: 0
Difficulty-aware coupled contour regression network with IoU loss for efficient IVUS delineation
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-18 · DOI: 10.1016/j.artmed.2025.103240
Yuan Yang , Xu Yu , Wei Yu , Shengxian Tu , Su Zhang , Wei Yang
Delineation of the lumen and external elastic lamina contours is crucial for quantitative analysis of intravascular ultrasound (IVUS) images. However, the various artifacts in IVUS images pose substantial challenges for accurate delineation. Existing mask-based methods often produce anatomically implausible contours in artifact-affected images, while contour-based methods suffer from over-smoothing within the artifact regions. In this paper, we directly regress the contour pairs instead of performing mask-based segmentation. A coupled contour representation is adopted to learn a low-dimensional contour signature space, where the embedded anatomical prior enables the model to avoid producing unreasonable results. Further, a PIoU loss is proposed to capture the overall shape of the contour points and to maximize the similarity between regressed contours and manually delineated contours of various irregular shapes, alleviating the over-smoothing problem. For images with severe artifacts, a difficulty-aware training strategy is designed for contour regression, which gradually guides the model to focus on hard samples and improves contour localization accuracy. We evaluate the proposed framework on a large IVUS dataset consisting of 7204 frames from 185 pullbacks. The mean Dice similarity coefficients of the method for the lumen and external elastic lamina are 0.951 and 0.967, significantly outperforming other state-of-the-art (SOTA) models. All regressed contours in the test images are anatomically plausible. On the public IVUS-2011 dataset, the proposed method attains performance comparable to SOTA models with the highest processing speed, at 100 fps. The code is available at https://github.com/SMU-MedicalVision/ContourRegression.
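A low-dimensional contour representation of the kind mentioned above can be illustrated with a hand-crafted polar signature: sample radii at fixed angles around the centroid. This is only a simple stand-in for intuition; the paper learns its coupled signature space rather than hand-crafting one.

```python
import math

def to_polar_signature(contour, n=8):
    """Encode a closed contour (list of (x, y) points) as n radii sampled at fixed
    angles from the centroid - a toy low-dimensional contour signature."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    sig = []
    for k in range(n):
        ang = 2 * math.pi * k / n
        # pick the contour point closest to this angular direction (wrap-safe diff)
        best = min(contour, key=lambda p: abs(
            (math.atan2(p[1] - cy, p[0] - cx) - ang + math.pi) % (2 * math.pi) - math.pi))
        sig.append(math.hypot(best[0] - cx, best[1] - cy))
    return (cx, cy), sig

def from_polar_signature(center, sig):
    """Decode the signature back into contour points."""
    cx, cy = center
    n = len(sig)
    return [(cx + r * math.cos(2 * math.pi * k / n),
             cy + r * math.sin(2 * math.pi * k / n)) for k, r in enumerate(sig)]

# A 16-point circle of radius 2 centred at (5, 5) round-trips through the signature.
circle = [(5 + 2 * math.cos(2 * math.pi * i / 16), 5 + 2 * math.sin(2 * math.pi * i / 16))
          for i in range(16)]
center, sig = to_polar_signature(circle, n=8)
rec = from_polar_signature(center, sig)
```

Regressing a handful of radii instead of a dense mask is what lets contour methods bake in an anatomical prior: any decoded signature is, by construction, a single closed curve.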
Citations: 0
BIGPN: Biologically informed graph propagational network for plasma proteomic profiling of neurodegenerative biomarkers
IF 6.2 · CAS Tier 2 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-15 · DOI: 10.1016/j.artmed.2025.103241
Sunghong Park , Dong-gi Lee , Juhyeon Kim , Masaud Shah , Hyunjung Shin , Hyun Goo Woo
Neurodegenerative diseases involve progressive neuronal dysfunction, requiring identification of specific pathological features for accurate diagnosis. Although cerebrospinal fluid analysis and neuroimaging are commonly employed, their invasiveness and high cost limit widespread clinical use. In contrast, blood-based biomarkers offer a non-invasive, cost-effective, and accessible alternative. Recent advances in plasma proteomics combined with machine learning (ML) have further improved diagnostic accuracy; however, the integration of underlying biological information remains largely overlooked. Notably, many ML-based plasma proteomic profiling approaches overlook protein-protein interactions (PPI) and the hierarchical structure of molecular pathways. To address these limitations, we propose the Biologically Informed Graph Propagational Network (BIGPN), a novel ML model for plasma proteomic profiling of neurodegenerative biomarkers. BIGPN employs a graph neural network-based architecture to harness a PPI network and propagates independent effects of proteins through the PPI network, capturing higher-order interactions with global awareness of PPIs. BIGPN then applies a multi-level pathway structure to extract biologically meaningful feature representations, ensuring that the model reflects structured biological mechanisms, and it provides clear explainability of the pathway structure in the context of importance through probabilistically represented parameters. Experimental validation on the UK Biobank dataset demonstrated the superior performance of BIGPN in neurodegenerative risk prediction, outperforming comparison methods. Furthermore, the explainability of BIGPN facilitated detailed analyses of the discriminative significance of synergistic effects, the predictive importance of proteins, and the longitudinal changes in biomarker profiles, reinforcing its clinical relevance.
Overall, BIGPN's integration of PPIs and pathway structure addresses critical gaps in ML-based plasma proteomic profiling, offering a powerful approach for improved neurodegenerative disease diagnosis.
BIGPN: Biologically informed graph propagational network for plasma proteomic profiling of neurodegenerative biomarkers. Artificial Intelligence in Medicine, vol. 169, Article 103241 (2025).
Citations: 0
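The propagation step the BIGPN abstract describes — spreading each protein's independent effect over a PPI network before pathway-level pooling — can be sketched minimally. Everything below (the toy adjacency matrix, the single-feature signal, and the row-normalization scheme) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch of message passing over a protein-protein interaction
# (PPI) graph. Toy data only; not the BIGPN model.
import numpy as np

def propagate(adj: np.ndarray, h: np.ndarray, steps: int = 2) -> np.ndarray:
    """Mix each protein's features with its PPI neighbours' features."""
    # Add self-loops, then row-normalize so each row sums to 1.
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    for _ in range(steps):
        h = a @ h  # each node averages in its neighbours' signal
    return h

# Toy PPI graph: protein 0 -- protein 1 -- protein 2 (a chain).
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
h0 = np.array([[1.0], [0.0], [0.0]])  # an effect measured only on protein 0
h2 = propagate(adj, h0)
print(h2.ravel())  # after two steps, the signal has reached proteins 1 and 2
```

Higher-order interactions emerge because each propagation step widens the neighbourhood a protein "sees" by one hop.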
Leveraging explainable artificial intelligence for transparent and trustworthy cancer detection systems
IF 6.2 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-14 | DOI: 10.1016/j.artmed.2025.103243
Shiva Toumaj , Arash Heidari , Nima Jafari Navimipour
Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31 % of model usage, SHAP is the predominant interpretability framework at 44.4 %, and Python is the leading programming language at 32.1 %. Only 7.4 % of studies address security issues. This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.
Artificial Intelligence in Medicine, vol. 169, Article 103243 (2025).
Citations: 0
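For a concrete flavor of the model-agnostic attribution methods this review surveys (SHAP being the most common at 44.4 %), here is a minimal occlusion-style sketch: mask one input feature at a time and record the score drop. The linear "risk model" and its weights are invented purely for illustration:

```python
# Occlusion-based feature attribution: a toy stand-in for the XAI
# techniques surveyed. The model and weights are hypothetical.
import numpy as np

def occlusion_importance(model, x, baseline=0.0):
    """Score drop when each feature is replaced by a baseline value."""
    ref = model(x)
    drops = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = baseline  # "occlude" feature i
        drops.append(ref - model(x_masked))
    return np.array(drops)

# Hypothetical linear scorer standing in for a trained classifier.
weights = np.array([0.7, 0.2, 0.1])
model = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
imp = occlusion_importance(model, x)
print(imp)  # first feature has the largest attribution
```

Real SHAP values additionally average over feature coalitions, but the core idea — attribute the prediction to inputs by perturbing them — is the same.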
Diagnostic performance of artificial intelligence in detecting and subtyping pediatric medulloblastoma from histopathological images: A systematic review
IF 6.2 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-08-14 | DOI: 10.1016/j.artmed.2025.103237
Hiba Alzoubi , Alaa Abd-alrazaq , Obada Almaabreh , Rawan AlSaad , Sarah Aziz , Rukaya Al-Dafi , Leen Abu Salih , Leen Turani , Sondos Albqowr , Rawan Abu Tarbosh , Batool Abu Alkishik , Rafat Damseh , Arfan Ahmed , Hashem Abu Serhan

Background

Medulloblastoma is the most prevalent malignant brain tumor in children, requiring timely and precise diagnosis to improve clinical outcomes. Artificial Intelligence (AI) offers a promising avenue to enhance diagnostic accuracy and efficiency in this domain.

Objective

This systematic review evaluates the performance of AI models in detecting and subtyping medulloblastomas using histopathological images.

Methods

In this systematic review, we searched seven databases to identify English-language studies assessing AI-based detection or classification of medulloblastomas in patients under 18 years. Two reviewers independently conducted study selection, data extraction, and risk of bias assessment. Results were synthesized narratively.

Results

Of 3341 records, 15 studies met inclusion criteria. AI models demonstrated strong diagnostic performance, with mean accuracy of 91.3 %, sensitivity of 94.2 %, and specificity of 97.4 %. Support Vector Machines achieved the highest accuracy (96.3 %) and specificity (99.4 %), while K-Nearest Neighbors showed the highest sensitivity (97.1 %). Detection tasks (accuracy 96.1 %, sensitivity 98.5 %) outperformed subtyping tasks (accuracy 87.3 %, sensitivity 91.3 %). Models analyzing images at the architectural level yielded higher accuracy (94.7 %), sensitivity (94.1 %), and specificity (98.2 %) compared to cellular-level analysis.

Conclusion

AI algorithms show promise in detecting and subtyping medulloblastomas, but the findings are limited by overreliance on one dataset, small sample sizes, limited study numbers, and lack of meta-analysis. Future research should develop larger, more diverse datasets and explore advanced approaches such as deep learning and foundation models. Techniques such as model ensembling and multimodal data integration are needed for better multiclass classification. Further reviews are needed to assess AI's role in other pediatric brain tumors.
Artificial Intelligence in Medicine, vol. 169, Article 103237 (2025).
Citations: 0
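The accuracy, sensitivity, and specificity figures reported throughout these abstracts all derive from the same confusion-matrix counts. A minimal computation — with toy counts, not the review's data — looks like this:

```python
# Standard diagnostic metrics from confusion-matrix counts.
# The counts below are invented for illustration.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Accuracy, sensitivity (recall on diseased), specificity (recall on healthy)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

acc, sens, spec = diagnostic_metrics(tp=90, fp=5, tn=95, fn=10)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

Note the trade-off visible in the review's numbers: a model can rank highest on sensitivity (few missed tumors) without also ranking highest on specificity (few false alarms), which is why both are reported alongside accuracy.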
Journal
Artificial Intelligence in Medicine
Book学术 · Contact: info@booksci.cn
Book学术 provides a free academic search service for Chinese- and English-language literature, serving scholars in China and abroad and committed to the most convenient, high-quality experience.
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 · 京ICP备2023020795号-1