Shamimeh Ahrari, Timothée Zaragori, Adeline Zinsz, Gabriela Hossu, Julien Oster, Bastien Allard, Laure Al Mansour, Darejan Bessac, Sami Boumedine, Caroline Bund, Nicolas De Leiris, Anthime Flaus, Eric Guedj, Aurélie Kas, Nathalie Keromnes, Kevin Kiraz, Fiene Marie Kuijper, Valentine Maitre, Solène Querellou, Guilhem Stien, Olivier Humbert, Laetitia Imbert, Antoine Verger
{"title":"可解释的机器学习与氨基酸PET成像的临床影响:在侵袭性胶质瘤诊断中的应用","authors":"Shamimeh Ahrari, Timothée Zaragori, Adeline Zinsz, Gabriela Hossu, Julien Oster, Bastien Allard, Laure Al Mansour, Darejan Bessac, Sami Boumedine, Caroline Bund, Nicolas De Leiris, Anthime Flaus, Eric Guedj, Aurélie Kas, Nathalie Keromnes, Kevin Kiraz, Fiene Marie Kuijper, Valentine Maitre, Solène Querellou, Guilhem Stien, Olivier Humbert, Laetitia Imbert, Antoine Verger","doi":"10.1007/s00259-024-07053-6","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Purpose</h3><p>Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have shown efficiency in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear physicians’ assessment of glioma aggressiveness at diagnosis.</p><h3 data-test=\"abstract-sub-heading\">Methods</h3><p>Patients underwent dynamic 6-[<sup>18</sup>F]fluoro-L-DOPA PET acquisition. With a 75%/25% split for training (<i>n</i> = 63) and test sets (<i>n</i> = 22), an ensemble ML model was trained using radiomics features extracted from static/dynamic parametric PET images to classify lesion aggressiveness. Three explainable ML methods—Local Interpretable Model-agnostic Explanations (LIME), Anchor, and SHapley Additive exPlanations (SHAP)—generated patient-specific explanations. Eighteen physicians from eight institutions evaluated the test samples. During the first phase, physicians analyzed the 22 cases exclusively through magnetic resonance and static/dynamic PET images, acquired within a maximum interval of 30 days. In the second phase, the same physicians reevaluated the same cases (<i>n</i> = 22), using all available data, including the radiomics model predictions and explanations.</p><h3 data-test=\"abstract-sub-heading\">Results</h3><p>Eighty-five patients (54[39–62] years old, 41 women) were selected. In the second phase, physicians demonstrated a significant improvement in diagnostic accuracy compared to the first phase (0.775 [0.750–0.802] vs. 0.717 [0.694–0.737], <i>p</i> = 0.007). The explainable radiomics model augmented physician agreement, with a 22.72% increase in Fleiss’s kappa, and significantly enhanced physician confidence (<i>p</i> < 0.001). 
Among all physicians, Anchor and SHAP showed efficacy in 75% and 72% of cases, respectively, outperforming LIME (<i>p</i> ≤ 0.001).</p><h3 data-test=\"abstract-sub-heading\">Conclusions</h3><p>Our results highlight the potential of an explainable radiomics model using amino acid PET scans as a diagnostic support to assist physicians in identifying glioma aggressiveness.</p>","PeriodicalId":11909,"journal":{"name":"European Journal of Nuclear Medicine and Molecular Imaging","volume":"42 1","pages":""},"PeriodicalIF":8.6000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Clinical impact of an explainable machine learning with amino acid PET imaging: application to the diagnosis of aggressive glioma\",\"authors\":\"Shamimeh Ahrari, Timothée Zaragori, Adeline Zinsz, Gabriela Hossu, Julien Oster, Bastien Allard, Laure Al Mansour, Darejan Bessac, Sami Boumedine, Caroline Bund, Nicolas De Leiris, Anthime Flaus, Eric Guedj, Aurélie Kas, Nathalie Keromnes, Kevin Kiraz, Fiene Marie Kuijper, Valentine Maitre, Solène Querellou, Guilhem Stien, Olivier Humbert, Laetitia Imbert, Antoine Verger\",\"doi\":\"10.1007/s00259-024-07053-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Purpose</h3><p>Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have shown efficiency in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear physicians’ assessment of glioma aggressiveness at diagnosis.</p><h3 data-test=\\\"abstract-sub-heading\\\">Methods</h3><p>Patients underwent dynamic 6-[<sup>18</sup>F]fluoro-L-DOPA PET acquisition. With a 75%/25% split for training (<i>n</i> = 63) and test sets (<i>n</i> = 22), an ensemble ML model was trained using radiomics features extracted from static/dynamic parametric PET images to classify lesion aggressiveness. Three explainable ML methods—Local Interpretable Model-agnostic Explanations (LIME), Anchor, and SHapley Additive exPlanations (SHAP)—generated patient-specific explanations. Eighteen physicians from eight institutions evaluated the test samples. During the first phase, physicians analyzed the 22 cases exclusively through magnetic resonance and static/dynamic PET images, acquired within a maximum interval of 30 days. In the second phase, the same physicians reevaluated the same cases (<i>n</i> = 22), using all available data, including the radiomics model predictions and explanations.</p><h3 data-test=\\\"abstract-sub-heading\\\">Results</h3><p>Eighty-five patients (54[39–62] years old, 41 women) were selected. In the second phase, physicians demonstrated a significant improvement in diagnostic accuracy compared to the first phase (0.775 [0.750–0.802] vs. 0.717 [0.694–0.737], <i>p</i> = 0.007). The explainable radiomics model augmented physician agreement, with a 22.72% increase in Fleiss’s kappa, and significantly enhanced physician confidence (<i>p</i> < 0.001). 
Among all physicians, Anchor and SHAP showed efficacy in 75% and 72% of cases, respectively, outperforming LIME (<i>p</i> ≤ 0.001).</p><h3 data-test=\\\"abstract-sub-heading\\\">Conclusions</h3><p>Our results highlight the potential of an explainable radiomics model using amino acid PET scans as a diagnostic support to assist physicians in identifying glioma aggressiveness.</p>\",\"PeriodicalId\":11909,\"journal\":{\"name\":\"European Journal of Nuclear Medicine and Molecular Imaging\",\"volume\":\"42 1\",\"pages\":\"\"},\"PeriodicalIF\":8.6000,\"publicationDate\":\"2025-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Nuclear Medicine and Molecular Imaging\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s00259-024-07053-6\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Nuclear Medicine and Molecular Imaging","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s00259-024-07053-6","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Clinical impact of an explainable machine learning with amino acid PET imaging: application to the diagnosis of aggressive glioma
Purpose
Radiomics-based machine learning (ML) models of amino acid positron emission tomography (PET) images have proven effective in glioma prediction tasks. However, their clinical impact on physician interpretation remains limited. This study investigated whether an explainable radiomics model modifies nuclear medicine physicians' assessment of glioma aggressiveness at diagnosis.
Methods
Patients underwent dynamic 6-[18F]fluoro-L-DOPA PET acquisition. Using a 75%/25% split into training (n = 63) and test (n = 22) sets, an ensemble ML model was trained on radiomics features extracted from static/dynamic parametric PET images to classify lesion aggressiveness. Patient-specific explanations were generated with three explainable ML methods: Local Interpretable Model-agnostic Explanations (LIME), Anchor, and SHapley Additive exPlanations (SHAP). Eighteen physicians from eight institutions evaluated the test samples. In the first phase, physicians assessed the 22 cases using only magnetic resonance imaging and static/dynamic PET images acquired within a maximum interval of 30 days. In the second phase, the same physicians reevaluated the same cases (n = 22) using all available data, including the radiomics model predictions and explanations.
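To make the pipeline concrete, the sketch below illustrates the general workflow described above: a 75%/25% split, an ensemble classifier trained on tabular radiomics features, and patient-specific SHAP and LIME explanations. This is not the authors' code; the placeholder data, feature names, and the choice of a random forest as the ensemble model are illustrative assumptions, since the abstract does not specify the model or features.

```python
# Minimal sketch (assumptions labeled): radiomics-feature classification with
# per-patient explanations. Data, labels, and the random forest are placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(85, 20))        # 85 patients x 20 radiomics features (placeholder values)
y = rng.integers(0, 2, size=85)      # 1 = aggressive, 0 = non-aggressive (placeholder labels)
feature_names = [f"radiomic_{i}" for i in range(X.shape[1])]

# 75%/25% train/test split, as described in the Methods
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Ensemble classifier (a random forest stands in for the unspecified ensemble model)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: additive per-feature attributions for each test patient
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: local surrogate explanation for a single test patient
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["non-aggressive", "aggressive"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
# Anchor (rule-based) explanations could be produced analogously with a dedicated
# library; the exact implementation used in the study is not specified here.
```

The key design point is that all three methods operate on the same trained classifier and produce case-level explanations, which is what allows them to be shown to physicians alongside the model prediction in the second reading phase.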
Results
Eighty-five patients (54 [39–62] years old, 41 women) were selected. In the second phase, physicians demonstrated a significant improvement in diagnostic accuracy compared to the first phase (0.775 [0.750–0.802] vs. 0.717 [0.694–0.737], p = 0.007). The explainable radiomics model increased inter-physician agreement, with a 22.72% increase in Fleiss's kappa, and significantly enhanced physician confidence (p < 0.001). Among all physicians, Anchor and SHAP showed efficacy in 75% and 72% of cases, respectively, outperforming LIME (p ≤ 0.001).
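For readers unfamiliar with the agreement metric reported above, the following is a minimal sketch of how Fleiss's kappa is computed for a panel of raters (here, the 18 physicians each rating the 22 test cases). The ratings matrix is random placeholder data, not the study's readings.

```python
# Minimal sketch: Fleiss's kappa for multi-rater agreement on binary calls
# (aggressive vs. non-aggressive). Placeholder ratings, not study data.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(22, 18))   # 22 cases x 18 physicians

counts, _ = aggregate_raters(ratings)          # per-case counts of each rating category
kappa = fleiss_kappa(counts, method="fleiss")
print(f"Fleiss's kappa: {kappa:.3f}")
```

Comparing the kappa obtained from the first-phase readings with that from the second-phase readings is what yields the relative change in agreement reported in the Results.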
Conclusions
Our results highlight the potential of an explainable radiomics model using amino acid PET scans as a diagnostic support tool to assist physicians in identifying glioma aggressiveness.
Journal description
The European Journal of Nuclear Medicine and Molecular Imaging serves as a platform for the exchange of clinical and scientific information within nuclear medicine and related professions. It welcomes international submissions from professionals involved in the functional, metabolic, and molecular investigation of diseases. The journal's coverage spans physics, dosimetry, radiation biology, radiochemistry, and pharmacy, providing high-quality peer review by experts in the field. Known for highly cited and downloaded articles, it ensures global visibility for research work and is part of the EJNMMI journal family.