An Artificial Intelligent System for Prostate Cancer Diagnosis in Whole Slide Images.
Sajib Saha, Janardhan Vignarajan, Adam Flesch, Patrik Jelinko, Petra Gorog, Eniko Szep, Csaba Toth, Peter Gombas, Tibor Schvarcz, Orsolya Mihaly, Marianna Kapin, Alexandra Zub, Levente Kuthi, Laszlo Tiszlavicz, Tibor Glasz, Shaun Frost
Journal of Medical Systems, 48(1):101. Published 2024-10-28.
DOI: 10.1007/s10916-024-02118-3 (https://doi.org/10.1007/s10916-024-02118-3)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519157/pdf/
Abstract
In recent years there has been significant demand for computer-assisted diagnostic tools to assess prostate cancer in whole slide images (WSIs). In this study we develop and validate a machine learning system for cancer assessment, including detection of perineural invasion and measurement of the cancer proportion, to meet clinical reporting needs. The system analyses a whole slide image in three consecutive stages: tissue detection, classification, and slide-level analysis. The whole slide image is first divided into smaller regions (patches). The tissue detection stage uses traditional machine learning to identify WSI patches that contain tissue; these patches are then assessed at the classification stage, where deep learning algorithms detect and classify cancer tissue. At the slide-level analysis stage, slide-level information is generated by aggregating the patch-level information across the slide. A total of 2340 haematoxylin and eosin stained slides were used to train and validate the system. A medical team of 11 board-certified pathologists with prostatic pathology subspeciality competences, working independently in 4 different medical centres, performed the annotations. The team created pixel-level annotations based on an agreed set of 10 annotation terms, chosen for medical relevance and prevalence. The system achieved an accuracy of 99.53% in tissue detection, with a sensitivity of 99.78% and a specificity of 99.12%. In classifying tissue terms at 5x magnification, the system achieved an accuracy of 92.80%, with a sensitivity of 92.61% and a specificity of 99.25%. At 10x magnification these values were 91.04%, 90.49%, and 99.07%, respectively; at 20x magnification they were 84.71%, 83.95%, and 90.13%.
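The staged, patch-based workflow described in the abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical outline only, not the authors' implementation: the patch size, the annotation term list, the brightness-threshold tissue detector, the dummy patch classifier, and the function names (tissue_classifier, cancer_classifier, analyse_slide) are all assumptions introduced for illustration. In the actual system, stage 1 is a trained traditional machine learning model and stage 2 is a deep network applied at 5x, 10x, or 20x magnification; the final helper merely restates the standard accuracy, sensitivity, and specificity formulas behind the reported figures.

```python
import numpy as np

# --- Hypothetical stand-ins (not from the paper) --------------------------
PATCH = 256  # assumed patch edge length in pixels
TERMS = ["benign", "gleason_3", "gleason_4", "gleason_5",
         "perineural_invasion"]  # illustrative labels, not the paper's 10 terms

def tissue_classifier(patch: np.ndarray) -> bool:
    """Stage 1 stand-in: a 'traditional ML' tissue detector.
    Here we simply threshold mean brightness (white background vs. tissue)."""
    return patch.mean() < 220

def cancer_classifier(patch: np.ndarray) -> str:
    """Stage 2 stand-in: a deep network would predict one of the annotation
    terms; this dummy picks pseudo-randomly for illustration only."""
    rng = np.random.default_rng(int(patch.sum()) % (2**32))
    return TERMS[rng.integers(len(TERMS))]

def analyse_slide(wsi: np.ndarray) -> dict:
    """Three consecutive stages: tissue detection, patch classification,
    and slide-level aggregation of the patch-level results."""
    h, w = wsi.shape[:2]
    counts = {t: 0 for t in TERMS}
    tissue_patches = 0
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = wsi[y:y + PATCH, x:x + PATCH]
            if not tissue_classifier(patch):        # stage 1: tissue detection
                continue
            tissue_patches += 1
            counts[cancer_classifier(patch)] += 1   # stage 2: classification
    # Stage 3: slide-level summary, e.g. fraction of tissue patches per term.
    cancer_terms = [t for t in TERMS if t != "benign"]
    cancer_patches = sum(counts[t] for t in cancer_terms)
    return {
        "tissue_patches": tissue_patches,
        "term_counts": counts,
        "cancer_fraction": cancer_patches / max(tissue_patches, 1),
        "perineural_invasion": counts["perineural_invasion"] > 0,
    }

def accuracy_sensitivity_specificity(tp: int, fp: int, tn: int, fn: int):
    """Metrics reported in the abstract, computed from binary confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

if __name__ == "__main__":
    fake_wsi = np.random.default_rng(0).integers(0, 256, (1024, 1024, 3))
    print(analyse_slide(fake_wsi))
```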
Journal description:
Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.