Andrew Janowczyk, Scott Doyle, Hannah Gilmore, Anant Madabhushi
{"title":"A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images.","authors":"Andrew Janowczyk, Scott Doyle, Hannah Gilmore, Anant Madabhushi","doi":"10.1080/21681163.2016.1141063","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: .9407 vs .9854 Detection Rate, .8218 vs .8489 <i>F</i>-score, .8061 vs .8364 true positive rate and .8822 vs 0.8932 positive predictive value. 
Our performance indices compare favourably with state of the art nuclear segmentation approaches for digital pathology images.</p>","PeriodicalId":51800,"journal":{"name":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5935259/pdf/nihms801416.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Methods in Biomechanics and Biomedical Engineering-Imaging and Visualization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21681163.2016.1141063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2016/4/28 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0
Abstract
Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high-performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine if higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show that we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: .9407 vs .9854 detection rate, .8218 vs .8489 F-score, .8061 vs .8364 true positive rate and .8822 vs .8932 positive predictive value. Our performance indices compare favourably with state-of-the-art nuclear segmentation approaches for digital pathology images.
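The resolution-adaptive idea described in the abstract — run an inexpensive classifier at low magnification and escalate only promising regions to finer, costlier levels — can be sketched as follows. This is a minimal toy illustration, not the authors' RADHicaL implementation: the per-pixel "score" stands in for a trained DL network's output at each magnification, and the threshold, pyramid layout, and 2×2 upsampling mapping are all illustrative assumptions.

```python
# Minimal sketch of a resolution-adaptive cascade (assumed details, not
# the paper's exact method). Each pyramid level doubles the resolution of
# the previous one; a pixel value in [0, 1] stands in for a DL network's
# predicted probability that the region contains the class of interest.

def adaptive_segment(pyramid, threshold=0.5):
    """pyramid: list of 2D float images ordered coarse to fine.
    Returns (positives, evaluated): fine-level coordinates predicted
    positive, and the total number of per-pixel evaluations performed."""
    # Start with every pixel of the coarsest level as a candidate.
    candidates = [(r, c) for r in range(len(pyramid[0]))
                         for c in range(len(pyramid[0][0]))]
    evaluated = 0
    for level, img in enumerate(pyramid):
        survivors = []
        for (r, c) in candidates:
            evaluated += 1
            if img[r][c] >= threshold:   # "worth a closer look"
                survivors.append((r, c))
        if level + 1 < len(pyramid):
            # Each surviving coarse pixel maps to a 2x2 block one level finer;
            # everything else is skipped at the finer (expensive) level.
            candidates = [(2 * r + dr, 2 * c + dc)
                          for (r, c) in survivors
                          for dr in (0, 1) for dc in (0, 1)]
        else:
            candidates = survivors
    return candidates, evaluated
```

On a two-level pyramid with a single positive region, the cascade evaluates only the fine-level pixels under the coarse detections, which is the source of the computational savings the paper reports at whole-slide scale.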
About the journal
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization is an international journal whose main goals are to promote solutions of excellence for both imaging and visualization of biomedical data, and to establish links among researchers, clinicians, the medical technology sector and end-users. The journal provides a comprehensive forum for discussion of the current state of the art in the scientific fields related to imaging and visualization, including, but not limited to:

- Applications of Imaging and Visualization
- Computational Bio-imaging and Visualization
- Computer Aided Diagnosis, Surgery, Therapy and Treatment
- Data Processing and Analysis
- Devices for Imaging and Visualization
- Grid and High Performance Computing for Imaging and Visualization
- Human Perception in Imaging and Visualization
- Image Processing and Analysis
- Image-based Geometric Modelling
- Imaging and Visualization in Biomechanics
- Imaging and Visualization in Biomedical Engineering
- Medical Clinics
- Medical Imaging and Visualization
- Multi-modal Imaging and Visualization
- Multiscale Imaging and Visualization
- Scientific Visualization
- Software Development for Imaging and Visualization
- Telemedicine Systems and Applications
- Virtual Reality
- Visual Data Mining and Knowledge Discovery