Privacy-preserving federated transfer learning for enhanced liver lesion segmentation in PET–CT imaging
Pub Date: 2025-08-28 | DOI: 10.1016/j.artmed.2025.103245
Rajesh Kumar, Shaoning Zeng, Jay Kumar, Zakria, Xinfeng Mao
Positron Emission Tomography–Computed Tomography (PET–CT) evaluation is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede the deployment of accurate deep learning models. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning’s privacy-preserving collaboration with transfer learning’s pre-trained model adaptation, enhancing liver lesion segmentation in PET–CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, namely Differential Privacy (DP) and Homomorphic Encryption (HE), our approach ensures robust segmentation without sharing sensitive patient data. Our main contributions are: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET–CT feature integration; (3) an extensive evaluation across diverse institutions with privacy-enhancing technologies such as DP and HE. Experiments on simulated multi-institutional PET–CT datasets demonstrate superior performance compared to baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.
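The abstract does not spell out the aggregation details, but combining federated learning with Differential Privacy typically follows the DP-FedAvg pattern: clip each institution's model update, average, and add calibrated Gaussian noise. A minimal sketch under that assumption (all function names and parameter values are illustrative, not the paper's method, and a real deployment would also need a privacy accountant):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_federated_round(client_updates, clip_norm=1.0, noise_mult=0.5):
    """One generic DP-FedAvg step: clip each client's update to a fixed
    L2 norm, average, then add Gaussian noise scaled to the clip bound."""
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_updates),
                       size=avg.shape)
    return avg + noise

# Toy usage: three "institutions" each contribute a flattened weight delta.
updates = [c * np.ones(4) for c in (0.2, -0.1, 0.4)]
print(dp_federated_round(updates))
```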
Physical foundations for trustworthy medical imaging: A survey for artificial intelligence researchers
Pub Date: 2025-08-26 | DOI: 10.1016/j.artmed.2025.103251
Miriam Cobo, David Corral Fontecha, Wilson Silva, Lara Lloret Iglesias
Artificial intelligence in medical imaging has grown rapidly in the past decade, driven by advances in deep learning and widespread access to computing resources. Applications cover diverse imaging modalities, including those based on electromagnetic radiation (e.g., X-rays), subatomic particles (e.g., nuclear imaging), and acoustic waves (ultrasound). Each modality's features and limitations are defined by its underlying physics. However, many artificial intelligence practitioners lack a solid understanding of the physical principles involved in medical image acquisition. This gap hinders leveraging the full potential of deep learning, as incorporating physics knowledge into artificial intelligence systems promotes trustworthiness, especially in limited-data scenarios. This work reviews the fundamental physical concepts behind medical imaging and examines their influence on recent developments in artificial intelligence, particularly generative models and reconstruction algorithms. Finally, we describe physics-informed machine learning approaches to improve feature learning in medical imaging.
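As a concrete illustration of the physics-informed approaches such surveys cover, one common pattern augments a data-fit loss with a penalty on violations of a known physical constraint. A minimal sketch, where the conservation constraint is a generic stand-in rather than one taken from the survey:

```python
import torch

def physics_informed_loss(pred, target, physics_residual, lam=0.1):
    """Generic physics-informed objective: data-fit term plus a penalty
    on how strongly predictions violate a known physical constraint."""
    data_term = torch.nn.functional.mse_loss(pred, target)
    physics_term = (physics_residual(pred) ** 2).mean()
    return data_term + lam * physics_term

# Toy constraint: predicted signal fractions should sum to 1 per sample;
# the residual measures the violation of that conservation law.
residual = lambda x: x.sum(dim=-1) - 1.0

pred = torch.randn(8, 4, requires_grad=True)
target = torch.full((8, 4), 0.25)
loss = physics_informed_loss(pred, target, residual)
loss.backward()  # gradients now reflect both data fit and the physics term
print(loss.item())
```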
TIPs: Tooth instance and pulp segmentation based on hierarchical extraction and fusion of anatomical priors from cone-beam CT
Pub Date: 2025-08-23 | DOI: 10.1016/j.artmed.2025.103247
Tao Zhong, Yang Ning, Xueyang Wu, Li Ye, Chichi Li, Yu Zhang, Yu Du
Accurate instance segmentation of teeth and pulp from cone-beam computed tomography (CBCT) images is essential but highly challenging due to the pulp's small structures and indistinct boundaries. To address these critical challenges, we propose TIPs, designed for Tooth Instance and Pulp segmentation. TIPs initially employs a backbone model to segment a binary mask of the teeth from CBCT images, which is then used to derive a position prior for the teeth and a shape prior for the pulp. Subsequently, we propose Hierarchical Fusion Mamba models to leverage the strengths of both anatomical priors and CBCT images by extracting and integrating shallow and deep features from Convolutional Neural Networks (CNNs) and State Space Sequence Models (SSMs), respectively. This process achieves tooth instance and pulp segmentation, which are then combined to obtain the final pulp instance segmentation. Extensive experiments on CBCT scans from 147 patients demonstrate that TIPs significantly outperforms state-of-the-art methods in segmentation accuracy. Furthermore, we have encapsulated this framework into an openly accessible tool for one-click use. To our knowledge, this is the first toolbox capable of segmenting tooth and pulp instances, with its performance validated on two external datasets comprising 59 samples from the Toothfairy2 dataset and 48 samples from the STS dataset. These results demonstrate the potential of TIPs as a practical tool to streamline clinical workflows in digital dentistry, enhancing the precision and efficiency of dental diagnostics and treatment planning.
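The abstract does not specify how the priors are computed; one plausible reading of the two-stage design is sketched below, with a distance transform standing in for the position prior and an eroded tooth interior for a crude pulp shape prior, both stacked with the image as extra input channels for the second stage. The actual TIPs construction may differ:

```python
import numpy as np
from scipy import ndimage

def derive_priors(tooth_mask):
    """From a stage-one binary tooth mask, build a position prior
    (distance from each tooth voxel to the background, i.e. depth inside
    the tooth) and a crude pulp shape prior (the eroded tooth interior)."""
    position_prior = ndimage.distance_transform_edt(tooth_mask)
    pulp_shape_prior = ndimage.binary_erosion(tooth_mask, iterations=3)
    return position_prior.astype(np.float32), pulp_shape_prior.astype(np.float32)

def stack_inputs(cbct, tooth_mask):
    # Stage-two network input: image plus anatomical priors as channels.
    pos, shape = derive_priors(tooth_mask)
    return np.stack([cbct, pos, shape], axis=0)

cbct = np.random.rand(64, 64).astype(np.float32)
mask = np.zeros((64, 64), bool)
mask[20:44, 20:44] = True
print(stack_inputs(cbct, mask).shape)  # (3, 64, 64)
```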
Multiplex aggregation combining sample reweight composite network for pathology image segmentation
Pub Date: 2025-08-22 | DOI: 10.1016/j.artmed.2025.103239
Dawei Fan, Zhuo Chen, Yifan Gao, Jiaming Yu, Kaibin Li, Yi Wei, Yanping Chen, Riqing Chen, Lifang Wei
In digital pathology, nuclei segmentation is a critical task for pathological image analysis, holding significant importance for diagnosis and research. However, challenges such as blurred boundaries between nuclei and background regions, domain shifts between pathological images, and the uneven distribution of nuclei pose significant obstacles to segmentation tasks. To address these issues, we propose an innovative Causal inference inspired Diversified aggregation convolution Network, named CDNet, which integrates Diversified Aggregation Convolution (DAC), a Causal Inference Module (CIM) based on causal discovery principles, and a comprehensive loss function. DAC mitigates the issue of unclear boundaries between nuclei and background regions, and CIM enhances the model's cross-domain generalization ability. A novel Stable-Weighted Combined loss function is designed that combines a chunk-computed Dice Loss with Focal Loss and a Causal Inference Loss to address the uneven distribution of nuclei. Experimental evaluations on the MoNuSeg, GLySAC, and MoNuSAC datasets demonstrate that CDNet significantly outperforms other models and exhibits strong generalization capabilities. Specifically, CDNet outperforms the second-best model by 0.79% (mIoU) and 1.32% (DSC) on the MoNuSeg dataset, by 2.65% (mIoU) and 2.13% (DSC) on the GLySAC dataset, and by 1.54% (mIoU) and 1.10% (DSC) on the MoNuSAC dataset. Code is publicly available at https://github.com/7FFDW/CDNet.
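The chunk computation and the Causal Inference Loss are not detailed in the abstract, but the Dice and Focal components are standard; a minimal PyTorch sketch of such a weighted combination, with the model-specific causal term left as a hook (weights and the plain, un-chunked Dice are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities for a binary mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def focal_loss(logits, target, gamma=2.0):
    """Focal loss: down-weights easy pixels via the (1 - pt)^gamma factor."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)  # probability assigned to the true class
    return ((1 - pt) ** gamma * bce).mean()

def combined_loss(logits, target, w_dice=1.0, w_focal=1.0,
                  w_causal=0.0, causal_term=None):
    """Weighted sum in the spirit of the Stable-Weighted Combined loss."""
    loss = w_dice * dice_loss(logits, target) + w_focal * focal_loss(logits, target)
    if causal_term is not None:
        loss = loss + w_causal * causal_term
    return loss

logits = torch.randn(2, 1, 32, 32)
target = (torch.rand(2, 1, 32, 32) > 0.7).float()
print(combined_loss(logits, target).item())
```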
Unprepared and overwhelmed: A case for clinician-focused AI education
Pub Date: 2025-08-22 | DOI: 10.1016/j.artmed.2025.103252
Nadia Siddiqui, Yazan Bouchi, Ellen Kim, Jonathan D. Hron, John Park, John Kang
This perspective illustrates the need for improved AI education for clinicians, highlighting gaps in current approaches and technical content. It advocates for the creation of AI guides specifically designed for clinicians, integrating case-based learning approaches and led by clinical informaticians. We emphasize the importance of modern medical educational strategies and reflect on the relevance and applicability of AI education, to ensure clinicians are prepared for safe, effective, and efficient AI-driven healthcare.
1–2 Sentence description
This position article reflects on the current landscape of AI educational guides for clinicians, identifying gaps in instructional approaches and technical content. We propose the development of case-based AI education modules led by clinical informatics physicians in collaboration with professional societies.
EvidenceMap: Learning evidence analysis to unleash the power of small language models for biomedical question answering
Pub Date: 2025-08-19 | DOI: 10.1016/j.artmed.2025.103246
Chang Zong, Jian Wan, Siliang Tang, Lei Zhang
When addressing professional questions in the biomedical domain, humans typically acquire multiple pieces of information as evidence and engage in multifaceted analysis to provide high-quality answers. Current LLM-based question answering methods lack a detailed definition and learning process for evidence analysis, leading to the risk of error propagation and hallucinations while using evidence. Although increasing the parameter size of LLMs can alleviate these issues, it also presents challenges in training and deployment with limited resources. In this study, we propose EvidenceMap, which aims to enable a lightweight pre-trained language model to explicitly learn multiple aspects of biomedical evidence, including supportive evaluation, logical correlation and content summarization, thereby latently guiding a generative model (around 3B parameters) to provide textual responses. Experimental results demonstrate that our method, learning evidence analysis by fine-tuning a model with only 66M parameters, exceeds the RAG method with an 8B LLM by 19.9% and 5.7% in reference-based quality and accuracy, respectively. The code and dataset for reproducing our framework and experiments are available at https://github.com/ZUST-BIT/EvidenceMap.
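The abstract implies a pipeline in which the small analyzer produces per-evidence judgments (supportive evaluation, logical correlation, summary) that then condition the ~3B generator. A structural sketch of that flow, with a stub standing in for the fine-tuned 66M-parameter analyzer; the data fields and prompt format are assumptions, not the paper's actual interface:

```python
from dataclasses import dataclass

@dataclass
class EvidenceAnalysis:
    text: str
    support: str   # e.g. "supports" / "refutes" / "neutral"
    summary: str

def analyze_evidence(question, passage):
    """Stub for the fine-tuned small analyzer; a real system would run the
    66M model here to judge support and summarize the passage."""
    return EvidenceAnalysis(passage, "supports", passage[:80])

def build_prompt(question, analyses):
    """Compose the generator prompt from explicit per-evidence analyses."""
    lines = [f"Question: {question}", "Evidence analysis:"]
    for i, a in enumerate(analyses, 1):
        lines.append(f"[{i}] ({a.support}) {a.summary}")
    lines.append("Answer:")
    return "\n".join(lines)

q = "Does drug X reduce HbA1c in type 2 diabetes?"
evidence = ["Trial A reported a 0.8% HbA1c reduction with drug X ...",
            "A meta-analysis found consistent glycemic benefits ..."]
print(build_prompt(q, [analyze_evidence(q, e) for e in evidence]))
```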
Difficulty-aware coupled contour regression network with IoU loss for efficient IVUS delineation
Pub Date: 2025-08-18 | DOI: 10.1016/j.artmed.2025.103240
Yuan Yang, Xu Yu, Wei Yu, Shengxian Tu, Su Zhang, Wei Yang
Delineation of the lumen and external elastic lamina contours is crucial for quantitative analysis of intravascular ultrasound (IVUS) images. However, the various artifacts in IVUS images pose substantial challenges for accurate delineation. Existing mask-based methods often produce anatomically implausible contours in artifact-affected images, while contour-based methods suffer from over-smoothing within artifact regions. In this paper, we directly regress the contour pairs instead of performing mask-based segmentation. A coupled contour representation is adopted to learn a low-dimensional contour signature space, where the embedded anatomical prior prevents the model from producing unreasonable results. Further, a PIoU loss is proposed to capture the overall shape of the contour points and maximize the similarity between the regressed contours and manually delineated contours with various irregular shapes, alleviating the over-smoothing problem. For images with severe artifacts, a difficulty-aware training strategy is designed for contour regression, which gradually guides the model to focus on hard samples and improves contour localization accuracy. We evaluate the proposed framework on a large IVUS dataset consisting of 7204 frames from 185 pullbacks. The mean Dice similarity coefficients of the method for the lumen and external elastic lamina are 0.951 and 0.967, significantly outperforming other state-of-the-art (SOTA) models. All regressed contours in the test images are anatomically plausible. On the public IVUS-2011 dataset, the proposed method attains performance comparable to SOTA models with the highest processing speed, at 100 fps. The code is available at https://github.com/SMU-MedicalVision/ContourRegression.
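One plausible reading of the coupled contour representation is a linear signature space fit jointly to paired lumen/EEL contours, so that any decoded signature stays within the span of anatomically valid training shapes. A minimal PCA-style sketch under that assumption (random data stands in for real contours, and the paper's construction may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: N samples, each a coupled lumen+EEL pair of K points,
# flattened into one vector so the two contours share a signature space.
N, K = 200, 64
contours = rng.normal(size=(N, 2 * K * 2))  # (lumen + EEL) x (x, y)

# Low-dimensional contour signature space via PCA on centered contours.
mean = contours.mean(axis=0)
U, S, Vt = np.linalg.svd(contours - mean, full_matrices=False)
basis = Vt[:16]  # keep 16 signature dimensions

def decode(signature):
    """Map a regressed signature back to a coupled lumen/EEL contour pair;
    staying in the span of training shapes keeps outputs plausible."""
    flat = mean + signature @ basis
    return flat.reshape(2, K, 2)  # (contour, point, xy)

sig = rng.normal(size=16) * S[:16] / np.sqrt(N)  # a plausible signature
lumen, eel = decode(sig)
print(lumen.shape, eel.shape)  # (64, 2) (64, 2)
```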
BIGPN: Biologically informed graph propagational network for plasma proteomic profiling of neurodegenerative biomarkers
Pub Date: 2025-08-15 | DOI: 10.1016/j.artmed.2025.103241
Sunghong Park, Dong-gi Lee, Juhyeon Kim, Masaud Shah, Hyunjung Shin, Hyun Goo Woo
Neurodegenerative diseases involve progressive neuronal dysfunction, requiring the identification of specific pathological features for accurate diagnosis. Although cerebrospinal fluid analysis and neuroimaging are commonly employed, their invasiveness and high cost limit widespread clinical use. In contrast, blood-based biomarkers offer a non-invasive, cost-effective, and accessible alternative. Recent advances in plasma proteomics combined with machine learning (ML) have further improved diagnostic accuracy; however, the integration of underlying biological information remains largely overlooked. Notably, many ML-based plasma proteomic profiling approaches overlook protein-protein interactions (PPIs) and the hierarchical structure of molecular pathways. To address these limitations, we propose the Biologically Informed Graph Propagational Network (BIGPN), a novel ML model for plasma proteomic profiling of neurodegenerative biomarkers. BIGPN employs a graph neural network-based architecture to harness a PPI network, propagating the independent effects of proteins through the network to capture higher-order interactions with global awareness of PPIs. BIGPN then applies a multi-level pathway structure to extract biologically meaningful feature representations, ensuring that the model reflects structured biological mechanisms, and its probabilistically represented parameters provide clear explainability of pathway importance. Experimental validation on the UK Biobank dataset demonstrated the superior performance of BIGPN in neurodegenerative risk prediction, outperforming comparison methods. Furthermore, the explainability of BIGPN facilitated detailed analyses of the discriminative significance of synergistic effects, the predictive importance of proteins, and longitudinal changes in biomarker profiles, reinforcing its clinical relevance. Overall, BIGPN's integration of PPIs and pathway structure addresses critical gaps in ML-based plasma proteomic profiling, offering a powerful approach for improved neurodegenerative disease diagnosis.
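The described propagation of protein effects through the PPI network resembles the familiar GCN-style update H ← ReLU(Â H W) with a symmetrically normalized adjacency. A minimal sketch on a toy PPI graph, which illustrates the propagation idea only and is not BIGPN's actual architecture:

```python
import numpy as np

def propagate(H, A, W, steps=2):
    """Graph propagation over a PPI adjacency matrix: add self-loops,
    normalize, then repeatedly mix each protein's features with its
    neighbors' so higher-order interactions accumulate over steps."""
    A_hat = A + np.eye(A.shape[0])            # self-loops keep own signal
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))  # symmetric normalization
    for _ in range(steps):
        H = np.maximum(A_norm @ H @ W, 0.0)   # ReLU(A_norm H W)
    return H

rng = np.random.default_rng(0)
n_proteins, n_feats = 6, 4
A = (rng.random((n_proteins, n_proteins)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric, no self-edges
H = rng.normal(size=(n_proteins, n_feats))    # per-protein effect features
W = rng.normal(size=(n_feats, n_feats))
print(propagate(H, A, W).shape)               # (6, 4)
```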
Leveraging explainable artificial intelligence for transparent and trustworthy cancer detection systems
Pub Date: 2025-08-14 | DOI: 10.1016/j.artmed.2025.103243
Shiva Toumaj, Arash Heidari, Nima Jafari Navimipour
Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31 % of model usage, SHAP is the predominant interpretability framework at 44.4 %, and Python is the leading programming language at 32.1 %. Only 7.4 % of studies address security issues. This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.
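For readers unfamiliar with SHAP, the interpretability framework the review finds most common, a minimal example of the typical usage pattern on a tree model for a cancer-detection-style dataset (the dataset and model choice are illustrative, not drawn from the reviewed studies):

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles;
# for a binary GBM they come back as one (samples, features) array.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Rank features by mean absolute contribution across the sample.
order = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print(X.columns[order][:5].tolist())
```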
Diagnostic performance of artificial intelligence in detecting and subtyping pediatric medulloblastoma from histopathological images: A systematic review
Pub Date: 2025-08-14 | DOI: 10.1016/j.artmed.2025.103237
Hiba Alzoubi, Alaa Abd-alrazaq, Obada Almaabreh, Rawan AlSaad, Sarah Aziz, Rukaya Al-Dafi, Leen Abu Salih, Leen Turani, Sondos Albqowr, Rawan Abu Tarbosh, Batool Abu Alkishik, Rafat Damseh, Arfan Ahmed, Hashem Abu Serhan
Background
Medulloblastoma is the most prevalent malignant brain tumor in children, requiring timely and precise diagnosis to improve clinical outcomes. Artificial Intelligence (AI) offers a promising avenue to enhance diagnostic accuracy and efficiency in this domain.
Objective
This systematic review evaluates the performance of AI models in detecting and subtyping medulloblastomas using histopathological images.
Methods
In this systematic review, we searched seven databases to identify English-language studies assessing AI-based detection or classification of medulloblastomas in patients under 18 years. Two reviewers independently conducted study selection, data extraction, and risk of bias assessment. Results were synthesized narratively.
Results
Of 3341 records, 15 studies met inclusion criteria. AI models demonstrated strong diagnostic performance, with mean accuracy of 91.3 %, sensitivity of 94.2 %, and specificity of 97.4 %. Support Vector Machines achieved the highest accuracy (96.3 %) and specificity (99.4 %), while K-Nearest Neighbors showed the highest sensitivity (97.1 %). Detection tasks (accuracy 96.1 %, sensitivity 98.5 %) outperformed subtyping tasks (accuracy 87.3 %, sensitivity 91.3 %). Models analyzing images at the architectural level yielded higher accuracy (94.7 %), sensitivity (94.1 %), and specificity (98.2 %) compared to cellular-level analysis.
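For reference, the three metrics summarized above derive directly from a confusion matrix; a small worked example with made-up counts, not taken from any reviewed study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """The three figures reported in the review, from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative numbers only: 97 of 100 medulloblastoma slides flagged,
# with 6 false alarms among 200 non-tumor slides.
print(diagnostic_metrics(tp=97, fp=6, tn=194, fn=3))  # (0.97, 0.97, 0.97)
```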
Conclusion
AI algorithms show promise in detecting and subtyping medulloblastomas, but the findings are limited by overreliance on a single dataset, small sample sizes, the limited number of studies, and the lack of a meta-analysis. Future research should develop larger, more diverse datasets and explore advanced approaches such as deep learning and foundation models. Techniques such as model ensembling and multimodal data integration are needed for better multiclass classification. Further reviews are needed to assess AI's role in other pediatric brain tumors.