Performance of externally validated machine learning models based on histopathology images for the diagnosis, classification, prognosis, or treatment outcome prediction in female breast cancer: A systematic review

Ricardo Gonzalez, Peyman Nejat, Ashirbani Saha, Clinton J.V. Campbell, Andrew P. Norgan, Cynthia Lokker

Journal of Pathology Informatics, Volume 15, Article 100348. Published 2023-11-05. DOI: 10.1016/j.jpi.2023.100348
Numerous machine learning (ML) models have been developed for breast cancer using various types of data. Successful external validation (EV) of ML models is important evidence of their generalizability. The aim of this systematic review was to assess the performance of externally validated ML models based on histopathology images for diagnosis, classification, prognosis, or treatment outcome prediction in female breast cancer. A systematic search of MEDLINE, EMBASE, CINAHL, IEEE, MICCAI, and SPIE conference proceedings was performed for studies published between January 2010 and February 2022. The Prediction Model Risk of Bias Assessment Tool (PROBAST) was employed, and the results were described narratively. Of the 2011 non-duplicated citations, 8 journal articles and 2 conference proceedings met the inclusion criteria. Three studies externally validated ML models for diagnosis, 4 for classification, 2 for prognosis, and 1 for both classification and prognosis. Most studies used convolutional neural networks; one used a logistic regression algorithm. For diagnostic/classification models, the most commonly reported performance metrics in the EV were accuracy and area under the curve, which were greater than 87% and 90%, respectively, using pathologists' annotations/diagnoses as the ground truth. For prognostic ML models, the hazard ratios in the EV were between 1.7 (95% CI, 1.2–2.6) and 1.8 (95% CI, 1.3–2.7) for predicting distant disease-free survival, 1.91 (95% CI, 1.11–3.29) for recurrence, and between 0.09 (95% CI, 0.01–0.70) and 0.65 (95% CI, 0.43–0.98) for overall survival, using clinical data as the ground truth. Although EV is an important step before the clinical application of an ML model, it has not been performed routinely. The large variability in the training/validation datasets, methods, performance metrics, and reported information limited comparison of the models and analysis of their results. Increasing the availability of validation datasets and implementing standardized methods and reporting protocols may facilitate future analyses.
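The reviewed studies report accuracy and area under the curve (AUC) for diagnostic/classification models and hazard ratios with 95% confidence intervals for prognostic models. The sketch below (Python, using scikit-learn, lifelines, and made-up placeholder data with an assumed label coding) illustrates how such external-validation metrics are commonly computed; it is not the pipeline used in any of the included studies.

```python
# Minimal, illustrative sketch of external-validation metrics; all data are placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, roc_auc_score
from lifelines import CoxPHFitter

# --- Diagnosis/classification: score model output against pathologist ground truth ---
pathologist_labels = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # 1 = malignant (assumed coding)
model_probs = np.array([0.10, 0.92, 0.85, 0.30, 0.75, 0.20, 0.60, 0.95])
model_labels = (model_probs >= 0.5).astype(int)            # threshold chosen for illustration

print("Accuracy:", accuracy_score(pathologist_labels, model_labels))
print("AUC:", roc_auc_score(pathologist_labels, model_probs))

# --- Prognosis: hazard ratio for a model-derived risk score on clinical follow-up data ---
df = pd.DataFrame({
    "risk_score": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7, 0.4, 0.6],  # hypothetical ML output
    "time": [60, 12, 34, 8, 72, 20, 45, 30],                 # months of follow-up
    "event": [0, 1, 1, 1, 0, 1, 0, 1],                       # 1 = event (e.g., recurrence) observed
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) is the hazard ratio; the summary also reports its 95% CI,
# the form in which the prognostic results above are expressed.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```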
About the journal:
The Journal of Pathology Informatics (JPI) is an open access peer-reviewed journal dedicated to the advancement of pathology informatics. This is the official journal of the Association for Pathology Informatics (API). The journal aims to publish broadly about pathology informatics and freely disseminate all articles worldwide. This journal is of interest to pathologists, informaticians, academics, researchers, health IT specialists, information officers, IT staff, vendors, and anyone with an interest in informatics. We encourage submissions from anyone with an interest in the field of pathology informatics. We publish all types of papers related to pathology informatics including original research articles, technical notes, reviews, viewpoints, commentaries, editorials, symposia, meeting abstracts, book reviews, and correspondence to the editors. All submissions are subject to rigorous peer review by the well-regarded editorial board and by expert referees in appropriate specialties.