Ex vivo 3D scanning and specimen mapping in anatomic pathology
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2022.100186
Alexander N. Perez, Kayvon F. Sharif, Erica Guelfi, Sophie Li, Alexis Miller, Kavita Prasad, Robert J. Sinard, James S. Lewis Jr, Michael C. Topf
Structured light three-dimensional (3D) scanning is a ubiquitous mainstay of object inspection and quality control in industrial manufacturing, and has recently been integrated into various medical disciplines. Photorealistic 3D scans can readily be acquired from fresh or formalin-fixed tissue and have potential for use within anatomic pathology (AP) in a variety of scenarios, ranging from direct clinical care to documentation and education. Methods for scanning and post-processing of fresh surgical specimens rely on relatively low-cost and technically simple procedures. Here, we demonstrate the potential use of 3D scanning in surgical pathology in the form of a mixed-media pathology report with a novel post-scan virtual inking and marking technique to precisely demarcate areas of tissue sectioning and details of final tumor and margin status. We display a sample mixed-media pathology report (3D specimen map) that integrates 3D and conventional pathology reporting methods. Finally, we describe the potential utility of 3D specimen modeling in both didactic and experiential teaching of gross pathology lab procedures.
{"title":"Ex vivo 3D scanning and specimen mapping in anatomic pathology","authors":"Alexander N. Perez , Kayvon F. Sharif , Erica Guelfi , Sophie Li , Alexis Miller , Kavita Prasad , Robert J. Sinard , James S. Lewis Jr , Michael C. Topf","doi":"10.1016/j.jpi.2022.100186","DOIUrl":"10.1016/j.jpi.2022.100186","url":null,"abstract":"<div><p>Structured light three-dimensional (3D) scanning is a ubiquitous mainstay of object inspection and quality control in industrial manufacturing, and has recently been integrated into various medical disciplines. Photorealistic 3D scans can readily be acquired from fresh or formalin-fixed tissue and have potential for use within anatomic pathology (AP) in a variety of scenarios, ranging from direct clinical care to documentation and education. Methods for scanning and post-processing of fresh surgical specimens rely on relatively low-cost and technically simple procedures. Here, we demonstrate potential use of 3D scanning in surgical pathology in the form of a mixed media pathology report with a novel post-scan virtual inking and marking technique to precisely demarcate areas of tissue sectioning and details of final tumor and margin status. We display a sample mixed-media pathology report (3D specimen map) which integrates 3D and conventional pathology reporting methods. Finally, we describe the potential utility of 3D specimen modeling in both didactic and experiential teaching of gross pathology lab procedures.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100186"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9852486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10584107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H&E image analysis pipeline for quantifying morphological features
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100339
Valeria Ariotta, Oskari Lehtonen, Shams Salloum, Giulia Micoli, Kari Lavikka, Ville Rantanen, Johanna Hynninen, Anni Virtanen, Sampsa Hautaniemi
Detecting cell types from histopathological images is essential for various digital pathology applications. However, the large number of cells in whole-slide images (WSIs) necessitates automated analysis pipelines for efficient cell type detection. Herein, we present the hematoxylin and eosin (H&E) Image Processing pipeline (HEIP) for automated analysis of scanned H&E-stained slides. HEIP is flexible and modular open-source software that performs preprocessing, instance segmentation, and nuclei feature extraction. To evaluate the performance of HEIP, we applied it to extract cell types from ovarian high-grade serous carcinoma (HGSC) patient WSIs. HEIP showed high precision in instance segmentation, particularly for neoplastic and epithelial cells. We also show that there is a significant correlation between genomic ploidy values and morphological features, such as the major axis of the nucleus.
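As a rough illustration of the nuclei feature extraction step described above, the sketch below computes simple per-nucleus morphology (including the major axis length, the feature correlated with ploidy in the abstract) from an instance segmentation mask using scikit-image. HEIP's actual implementation and API are not shown here; the function name and feature selection are illustrative assumptions.

```python
# A minimal sketch of per-nucleus morphological feature extraction, assuming
# an instance segmentation mask is already available from an upstream stage.
import numpy as np
from skimage.measure import regionprops

def extract_nuclei_features(instance_mask: np.ndarray) -> list[dict]:
    """Compute simple morphological features for each segmented nucleus.

    instance_mask: 2D integer array where 0 is background and each
    positive integer labels one nucleus instance.
    """
    features = []
    for region in regionprops(instance_mask):
        features.append({
            "label": region.label,
            "area": region.area,
            # Major/minor axis of the fitted ellipse, in pixels; the paper
            # correlates the major axis with genomic ploidy values.
            "major_axis_length": region.major_axis_length,
            "minor_axis_length": region.minor_axis_length,
            "eccentricity": region.eccentricity,
        })
    return features
```

In a full pipeline, a function like this would run downstream of instance segmentation, once per tile or slide region.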
{"title":"H&E image analysis pipeline for quantifying morphological features","authors":"Valeria Ariotta , Oskari Lehtonen , Shams Salloum , Giulia Micoli , Kari Lavikka , Ville Rantanen , Johanna Hynninen , Anni Virtanen , Sampsa Hautaniemi","doi":"10.1016/j.jpi.2023.100339","DOIUrl":"https://doi.org/10.1016/j.jpi.2023.100339","url":null,"abstract":"<div><p>Detecting cell types from histopathological images is essential for various digital pathology applications. However, large number of cells in whole-slide images (WSIs) necessitates automated analysis pipelines for efficient cell type detection. Herein, we present hematoxylin and eosin (H&E) Image Processing pipeline (HEIP) for automatied analysis of scanned H&E-stained slides. HEIP is a flexible and modular open-source software that performs preprocessing, instance segmentation, and nuclei feature extraction. To evaluate the performance of HEIP, we applied it to extract cell types from ovarian high-grade serous carcinoma (HGSC) patient WSIs. HEIP showed high precision in instance segmentation, particularly for neoplastic and epithelial cells. We also show that there is a significant correlation between genomic ploidy values and morphological features, such as major axis of the nucleus.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100339"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49858221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proposing a hybrid technique of feature fusion and convolutional neural network for melanoma skin cancer detection
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100341
Md. Mahbubur Rahman, Mostofa Kamal Nasir, Md. Nur-A-Alam, Md. Saikat Islam Khan
Skin cancer is among the most common cancer types worldwide. Automatic identification of skin cancer is complicated because of the poor contrast and apparent resemblance between skin and lesions. The death rate can be significantly reduced if melanoma skin cancer is detected quickly using dermoscopy images. This research uses an anisotropic diffusion filtering method on dermoscopy images to remove multiplicative speckle noise. The fast-bounding box (FBB) method is then applied to segment the skin cancer region. We also employ 2 feature extractors to represent images. The first is the Hybrid Feature Extractor (HFE), and the second is a VGG19-based convolutional neural network (CNN). The HFE combines 3 feature extraction approaches, namely Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), and Speeded Up Robust Features (SURF), into a single fused feature vector. The CNN is also used to extract additional features from the test and training datasets. These 2 feature vectors are then fused to design the classification model. The proposed method is evaluated on 2 datasets, namely ISIC 2017 and the academic torrents dataset, and achieves 99.85%, 91.65%, and 95.70% in terms of accuracy, sensitivity, and specificity, respectively, making it more successful than previously proposed machine learning algorithms.
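To make the fusion step concrete, here is a minimal sketch of concatenating handcrafted descriptors into one vector in the spirit of the HFE. SURF is omitted because it requires the non-default opencv-contrib build; all function names are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of the hand-crafted half of a hybrid feature extractor:
# HOG and LBP descriptors concatenated into one fused vector.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern

def fused_handcrafted_features(rgb_image: np.ndarray) -> np.ndarray:
    gray = rgb2gray(rgb_image)
    # HOG: histograms of gradient orientations over local cells.
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    # LBP: texture codes summarized as a normalized histogram.
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Fuse by simple concatenation into a single feature vector.
    return np.concatenate([hog_vec, lbp_hist])

def fuse_with_cnn(handcrafted: np.ndarray, cnn_embedding: np.ndarray) -> np.ndarray:
    # Second fusion stage: append deep features (e.g., a VGG19 embedding)
    # before training the final classifier.
    return np.concatenate([handcrafted, cnn_embedding])
```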
{"title":"Proposing a hybrid technique of feature fusion and convolutional neural network for melanoma skin cancer detection","authors":"Md. Mahbubur Rahman , Mostofa Kamal Nasir , Md. Nur-A-Alam , Md. Saikat Islam Khan","doi":"10.1016/j.jpi.2023.100341","DOIUrl":"https://doi.org/10.1016/j.jpi.2023.100341","url":null,"abstract":"<div><p>Skin cancer is among the most common cancer types worldwide. Automatic identification of skin cancer is complicated because of the poor contrast and apparent resemblance between skin and lesions. The rate of human death can be significantly reduced if melanoma skin cancer could be detected quickly using dermoscopy images. This research uses an anisotropic diffusion filtering method on dermoscopy images to remove multiplicative speckle noise. To do this, the fast-bounding box (FBB) method is applied here to segment the skin cancer region. We also employ 2 feature extractors to represent images. The first one is the Hybrid Feature Extractor (HFE), and second one is the convolutional neural network VGG19-based CNN. The HFE combines 3 feature extraction approaches namely, Histogram-Oriented Gradient (HOG), Local Binary Pattern (LBP), and Speed Up Robust Feature (SURF) into a single fused feature vector. The CNN method is also used to extract additional features from test and training datasets. This 2-feature vector is then fused to design the classification model. The proposed method is then employed on 2 datasets namely, ISIC 2017 and the academic torrents dataset. Our proposed method achieves 99.85%, 91.65%, and 95.70% in terms of accuracy, sensitivity, and specificity, respectively, making it more successful than previously proposed machine learning algorithms.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100341"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353923001554/pdfft?md5=05d2a723e55b6fa38d611a914a8c9ed2&pid=1-s2.0-S2153353923001554-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92014657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pathology Informatics Summit 2022, David L. Lawrence Convention Center, May 9-12, Pittsburgh, PA
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100325
{"title":"Pathology Informatics Summit 2022 David L. Lawrence Convention Center May 9-12 Pittsburgh, PA","authors":"","doi":"10.1016/j.jpi.2023.100325","DOIUrl":"https://doi.org/10.1016/j.jpi.2023.100325","url":null,"abstract":"","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100325"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2153353923001396/pdfft?md5=c46063ee468d72a70da68565b9981e16&pid=1-s2.0-S2153353923001396-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138466642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence-based multi-class histopathologic classification of kidney neoplasms
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100299
Dibson D. Gondim, Khaleel I. Al-Obaidy, Muhammad T. Idrees, John N. Eble, Liang Cheng
Artificial intelligence (AI)-based techniques are increasingly being explored as an emerging ancillary tool for improving the accuracy and reproducibility of histopathological diagnosis. Renal cell carcinoma (RCC) is a malignancy responsible for 2% of cancer deaths worldwide. Given that RCC is a heterogeneous disease, accurate histopathological classification is essential to separate aggressive subtypes from indolent ones and benign mimickers. There are early promising results using AI to distinguish between 2 and 3 subtypes of RCC. However, it is not clear how an AI-based model designed for multiple RCC subtypes and benign mimickers would perform, a scenario closer to the real practice of pathology. A computational model was created using 252 whole slide images (WSIs) (clear cell RCC: 56, papillary RCC: 81, chromophobe RCC: 51, clear cell papillary RCC: 39, and metanephric adenoma: 6). 298,071 patches (350 × 350 pixels) were used to develop the AI-based image classifier. The model was applied to a secondary dataset and correctly classified 47/55 (85%) WSIs. This computational model showed excellent results except in distinguishing clear cell RCC from clear cell papillary RCC. Further validation using multi-institutional large datasets and prospective studies is needed to determine the potential for translation to clinical practice.
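As an illustration of how a patch-based classifier of this kind can produce a slide-level call, the sketch below tiles an image into 350 × 350 patches and aggregates patch predictions by majority vote. The abstract does not specify the authors' aggregation rule, and `classify_patch` is a hypothetical stand-in for the trained model.

```python
# A minimal sketch of patch-based, slide-level classification: tile the
# slide into 350x350 patches, classify each patch, aggregate by majority
# vote. The background-skipping threshold is a common heuristic, not the
# authors' documented preprocessing.
from collections import Counter
from typing import Callable
import numpy as np

PATCH = 350  # pixels, matching the patch size reported in the abstract

def classify_slide(slide: np.ndarray,
                   classify_patch: Callable[[np.ndarray], str]) -> str:
    """slide: HxWx3 RGB array of tissue; returns the majority patch label."""
    votes = []
    h, w, _ = slide.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = slide[y:y + PATCH, x:x + PATCH]
            if patch.mean() > 220:  # skip mostly-background (bright) tiles
                continue
            votes.append(classify_patch(patch))  # e.g., "ccRCC", "pRCC", ...
    return Counter(votes).most_common(1)[0][0] if votes else "no_tissue"
```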
{"title":"Artificial intelligence-based multi-class histopathologic classification of kidney neoplasms","authors":"Dibson D. Gondim , Khaleel I. Al-Obaidy , Muhammad T. Idrees , John N. Eble , Liang Cheng","doi":"10.1016/j.jpi.2023.100299","DOIUrl":"10.1016/j.jpi.2023.100299","url":null,"abstract":"<div><p>Artificial intelligence (AI)-based techniques are increasingly being explored as an emerging ancillary technique for improving accuracy and reproducibility of histopathological diagnosis. Renal cell carcinoma (RCC) is a malignancy responsible for 2% of cancer deaths worldwide. Given that RCC is a heterogenous disease, accurate histopathological classification is essential to separate aggressive subtypes from indolent ones and benign mimickers. There are early promising results using AI for RCC classification to distinguish between 2 and 3 subtypes of RCC. However, it is not clear how an AI-based model designed for multiple subtypes of RCCs, and benign mimickers would perform which is a scenario closer to the real practice of pathology. A computational model was created using 252 whole slide images (WSI) (clear cell RCC: 56, papillary RCC: 81, chromophobe RCC: 51, clear cell papillary RCC: 39, and, metanephric adenoma: 6). 298,071 patches were used to develop the AI-based image classifier. 298,071 patches (350 × 350-pixel) were used to develop the AI-based image classifier. The model was applied to a secondary dataset and demonstrated that 47/55 (85%) WSIs were correctly classified. This computational model showed excellent results except to distinguish clear cell RCC from clear cell papillary RCC. Further validation using multi-institutional large datasets and prospective studies are needed to determine the potential to translation to clinical practice.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100299"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10006494/pdf/main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9114263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Imaging bridges pathology and radiology
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100298
Martin-Leo Hansmann, Frederick Klauschen, Wojciech Samek, Klaus-Robert Müller, Emmanuel Donnadieu, Sonja Scharf, Sylvia Hartmann, Ina Koch, Jörg Ackermann, Liron Pantanowitz, Hendrik Schäfer, Patrick Wurzel
In recent years, medical disciplines have moved closer together and rigid borders have been increasingly dissolved. The synergetic advantage of combining multiple disciplines is particularly important for radiology, nuclear medicine, and pathology to perform integrative diagnostics. In this review, we discuss how medical subdisciplines can be reintegrated in the future using state-of-the-art methods of digitization, data science, and machine learning. Integration of methods is made possible by the digitalization of radiological and nuclear medical images, as well as pathological images. 3D histology can become a valuable tool, not only for integration into radiological images but also for the visualization of cellular interactions, the so-called connectomes. In human pathology, it has recently become possible to image and calculate the movements and contacts of immunostained cells in fresh tissue explants. Recording the movement of a living cell is proving to be informative and makes it possible to study dynamic connectomes in the diagnosis of lymphoid tissue. By applying computational methods including data science and machine learning, new perspectives for analyzing and understanding diseases become possible.
{"title":"Imaging bridges pathology and radiology","authors":"Martin-Leo Hansmann , Frederick Klauschen , Wojciech Samek , Klaus-Robert Müller , Emmanuel Donnadieu , Sonja Scharf , Sylvia Hartmann , Ina Koch , Jörg Ackermann , Liron Pantanowitz , Hendrik Schäfer , Patrick Wurzel","doi":"10.1016/j.jpi.2023.100298","DOIUrl":"10.1016/j.jpi.2023.100298","url":null,"abstract":"<div><p>In recent years, medical disciplines have moved closer together and rigid borders have been increasingly dissolved. The synergetic advantage of combining multiple disciplines is particularly important for radiology, nuclear medicine, and pathology to perform integrative diagnostics. In this review, we discuss how medical subdisciplines can be reintegrated in the future using state-of-the-art methods of digitization, data science, and machine learning. Integration of methods is made possible by the digitalization of radiological and nuclear medical images, as well as pathological images. 3D histology can become a valuable tool, not only for integration into radiological images but also for the visualization of cellular interactions, the so-called connectomes. In human pathology, it has recently become possible to image and calculate the movements and contacts of immunostained cells in fresh tissue explants. Recording the movement of a living cell is proving to be informative and makes it possible to study dynamic connectomes in the diagnosis of lymphoid tissue. By applying computational methods including data science and machine learning, new perspectives for analyzing and understanding diseases become possible.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100298"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/b9/5d/main.PMC9958472.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10281597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of Remote Digital Pathology based diagnostic reporting of Frozen Sections from home
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100312
Rajiv Kumar Kaushal, Subhash Yadav, Ayushi Sahay, Nupur Karnik, Tushar Agrawal, Vinayak Dave, Nikhil Singh, Ashish Shah, Sangeeta B. Desai
Background
Despite the promising applications of whole-slide imaging (WSI) for frozen section (FS) diagnosis, its adoption for remote reporting is limited.
Objective
To assess the feasibility and performance of home-based remote digital consultation for FS diagnosis.
Material & Method
Cases accessioned beyond regular working hours (5 pm–10 pm) were reported simultaneously using optical microscopy (OM) and WSI. Validation of WSI for FS diagnosis from a remote site, i.e., home, was performed by 5 pathologists. Cases were scanned using a portable scanner (Grundium Ocus®40) and previewed on consumer-grade computer devices through a web-based browser (http://grundium.net). Clinical data and diagnostic reports were shared through a Google spreadsheet. The diagnostic concordance, inter- and intra-observer agreement for FS diagnosis by WSI versus OM, and turnaround time (TAT) were recorded.
Results
The overall diagnostic accuracy for OM and WSI (from home) was 98.2% (range 97%–100%) and 97.6% (range 95%–99%), respectively, when compared with the reference standard. Almost perfect inter-observer (k = 0.993) and intra-observer (k = 0.987) agreement for WSI was observed for 4 pathologists. Pathologists used consumer-grade laptops/desktops with an average screen size of 14.58 inches (range: 12.3–17.7 inches) and an average network speed of 64 megabits per second (range: 10–90 Mbps). The mean diagnostic assessment time per case was 1:48 min for OM and 5:54 min for WSI. A mean TAT of 27.27 min per case was observed using WSI from home. Seamless connectivity was observed in approximately 75% of cases.
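For readers reproducing this kind of validation, agreement statistics like those above can be computed with scikit-learn's Cohen's kappa. The labels below are invented for illustration and are not the study's data; the study's exact statistical procedure is not detailed in the abstract.

```python
# A minimal sketch of computing observer agreement between WSI and OM
# diagnoses with Cohen's kappa; diagnosis labels here are fabricated
# placeholders for illustration only.
from sklearn.metrics import cohen_kappa_score

wsi_dx = ["benign", "malignant", "malignant", "benign", "deferred"]
om_dx = ["benign", "malignant", "malignant", "benign", "malignant"]

kappa = cohen_kappa_score(wsi_dx, om_dx)
print(f"WSI vs OM agreement: k = {kappa:.3f}")
```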
Conclusion
This study validates the use of WSI for remote FS diagnosis, supporting its safe and efficient adoption in clinical practice.
{"title":"Validation of Remote Digital Pathology based diagnostic reporting of Frozen Sections from home","authors":"Rajiv Kumar Kaushal, Subhash Yadav, Ayushi Sahay, Nupur Karnik, Tushar Agrawal, Vinayak Dave, Nikhil Singh, Ashish Shah, Sangeeta B. Desai","doi":"10.1016/j.jpi.2023.100312","DOIUrl":"10.1016/j.jpi.2023.100312","url":null,"abstract":"<div><h3>Background</h3><p>Despite the promising applications of whole-slide imaging (WSI) for frozen section (FS) diagnosis, its adoption for remote reporting is limited.</p></div><div><h3>Objective</h3><p>To assess the feasibility and performance of home-based remote digital consultation for FS diagnosis.</p></div><div><h3>Material & Method</h3><p>Cases accessioned beyond regular working hours (5 pm–10 pm) were reported simultaneously using optical microscopy (OM) and WSI. Validation of WSI for FS diagnosis from a remote site, i.e. home, was performed by 5 pathologists. Cases were scanned using a portable scanner (Grundium Ocus®40) and previewed on consumer-grade computer devices through a web-based browser (<span>http://grundium.net</span><svg><path></path></svg>). Clinical data and diagnostic reports were shared through a google spreadsheet. The diagnostic concordance, inter- and intra-observer agreement for FS diagnosis by WSI versus OM, and turnaround time (TAT), were recorded.</p></div><div><h3>Results</h3><p>The overall diagnostic accuracy for OM and WSI (from home) was 98.2% (range 97%–100%) and 97.6% (range 95%–99%), respectively, when compared with the reference standard. Almost perfect inter-observer (k = 0.993) and intra-observer (k = 0.987) agreement for WSI was observed by 4 pathologists. Pathologists used consumer-grade laptops/desktops with an average screen size of 14.58 inches (range = 12.3–17.7 inches) and a network speed of 64 megabits per second (range: 10–90 Mbps). The mean diagnostic assessment time per case for OM and WSI was 1:48 min and 5:54 min, respectively. Mean TAT of 27.27 min per case was observed using WSI from home. Seamless connectivity was observed in approximately 75% of cases.</p></div><div><h3>Conclusion</h3><p>This study validates the role of WSI for remote FS diagnosis for its safe and efficient adoption in clinical use.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100312"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/81/ac/main.PMC10192998.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9496429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100195
Maryam Berijanian, Nadine S. Schaadt, Boqiang Huang, Johannes Lotz, Friedrich Feuerhake, Dorit Merhof
Background
Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks, since manual image annotation is an expensive and laborious process. The situation deteriorates even further when image variability is large. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.
Methods
StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissue. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to produce an edge-preserving translation. Additionally, a subjective test is conducted with medical and technical experts in the field of digital pathology to evaluate the quality of the generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of augmentation with the synthesized images on classification accuracy.
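Below is a minimal PyTorch sketch of an edge-preserving constraint of the kind described, assuming Sobel edge maps and an L1 penalty between source and translated images. This is an illustrative reconstruction, not the authors' exact loss term.

```python
# A minimal sketch of an edge-consistency loss added to a stain-translation
# GAN: penalize differences between Sobel edge maps of the source and
# translated images so tissue contours survive the color change.
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
_SOBEL_Y = _SOBEL_X.t()

def edge_map(img: torch.Tensor) -> torch.Tensor:
    """img: (N, C, H, W); returns per-pixel gradient magnitude."""
    gray = img.mean(dim=1, keepdim=True)  # collapse the stain channels
    kx = _SOBEL_X.view(1, 1, 3, 3).to(img.device)
    ky = _SOBEL_Y.view(1, 1, 3, 3).to(img.device)
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(source: torch.Tensor, translated: torch.Tensor) -> torch.Tensor:
    # L1 distance between edge maps, to be added (with some weight) to the
    # usual adversarial and cycle objectives during training.
    return F.l1_loss(edge_map(source), edge_map(translated))
```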
Results
The results show that adding an edge detector helps to improve the quality of the translated images and to preserve the general structure of the tissues. Quality control and subjective tests with our medical and technical experts show that real and artificial images cannot be distinguished, confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifiers based on ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.
Conclusions
This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.
{"title":"Unsupervised many-to-many stain translation for histological image augmentation to improve classification accuracy","authors":"Maryam Berijanian , Nadine S. Schaadt , Boqiang Huang , Johannes Lotz , Friedrich Feuerhake , Dorit Merhof","doi":"10.1016/j.jpi.2023.100195","DOIUrl":"10.1016/j.jpi.2023.100195","url":null,"abstract":"<div><h3>Background</h3><p>Deep learning tasks, which require large numbers of images, are widely applied in digital pathology. This poses challenges especially for supervised tasks since manual image annotation is an expensive and laborious process. This situation deteriorates even more in the case of a large variability of images. Coping with this problem requires methods such as image augmentation and synthetic image generation. In this regard, unsupervised stain translation via GANs has gained much attention recently, but a separate network must be trained for each pair of source and target domains. This work enables unsupervised many-to-many translation of histopathological stains with a single network while seeking to maintain the shape and structure of the tissues.</p></div><div><h3>Methods</h3><p>StarGAN-v2 is adapted for unsupervised many-to-many stain translation of histopathology images of breast tissues. An edge detector is incorporated to motivate the network to maintain the shape and structure of the tissues and to have an edge-preserving translation. Additionally, a subjective test is conducted on medical and technical experts in the field of digital pathology to evaluate the quality of generated images and to verify that they are indistinguishable from real images. As a proof of concept, breast cancer classifiers are trained with and without the generated images to quantify the effect of image augmentation using the synthetized images on classification accuracy.</p></div><div><h3>Results</h3><p>The results show that adding an edge detector helps to improve the quality of translated images and to preserve the general structure of tissues. Quality control and subjective tests on our medical and technical experts show that the real and artificial images cannot be distinguished, thereby confirming that the synthetic images are technically plausible. Moreover, this research shows that, by augmenting the training dataset with the outputs of the proposed stain translation method, the accuracy of breast cancer classifier with ResNet-50 and VGG-16 improves by 8.0% and 9.3%, respectively.</p></div><div><h3>Conclusions</h3><p>This research indicates that a translation from an arbitrary source stain to other stains can be performed effectively within the proposed framework. The generated images are realistic and could be employed to train deep neural networks to improve their performance and cope with the problem of insufficient numbers of annotated images.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100195"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/a5/6e/main.PMC9947329.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9356483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An interpretable decision-support model for breast cancer diagnosis using histopathology images
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100319
Sruthi Krishna, S.S. Suganthi, Arnav Bhavsar, Jyotsna Yesodharan, Shivsubramani Krishnamoorthy
Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for confirming the presence of cancer cells. Manual analysis of the overwhelming inflow of tissue slides is highly susceptible to misreading by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting definitive diagnosis of cancer. Convolutional Neural Networks (CNNs) have turned out to be the most adaptable and effective technique for detecting abnormal pathologic histology. Despite their high sensitivity and predictive power, clinical translation is constrained by a lack of intelligible insights into the prediction. A computer-aided system that can offer a definitive diagnosis and interpretability is therefore highly desirable. Conventional visual explanatory techniques such as Class Activation Mapping (CAM), combined with CNN models, offer interpretable decision making. The major challenge with CAM is that it cannot be optimized to create the best visualization map, and it also decreases the performance of the CNN models.
To address this challenge, we introduce a novel interpretable decision-support model: a CNN with a trainable attention mechanism based on response-based, feed-forward visual explanation. We introduce a variant of the DarkNet19 CNN model for the classification of histopathology images. To achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with the DarkNet19 network, forming an Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap that identifies the region of interest. Finally, the perception branch classifies images using a fully connected layer.
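The attention-branch mechanism described above can be sketched as follows. Layer sizes and names are illustrative assumptions; this simplified module is not the authors' exact DarkNet19-based architecture.

```python
# A minimal PyTorch sketch of the Attention Branch Network idea: a
# convolutional attention branch produces a heatmap that reweights backbone
# features before the perception branch classifies.
import torch
import torch.nn as nn

class MiniABN(nn.Module):
    def __init__(self, in_ch: int = 512, n_classes: int = 2):
        super().__init__()
        # Attention branch: 1x1 conv to per-class maps, then a 1-channel map.
        self.att_conv = nn.Conv2d(in_ch, n_classes, kernel_size=1)
        self.att_map = nn.Sequential(nn.Conv2d(n_classes, 1, kernel_size=1),
                                     nn.Sigmoid())
        self.gap = nn.AdaptiveAvgPool2d(1)  # GAP, as in the paper's design
        self.classifier = nn.Linear(in_ch, n_classes)  # perception branch

    def forward(self, feats: torch.Tensor):
        """feats: (N, in_ch, H, W) backbone features, e.g., from DarkNet19."""
        class_maps = self.att_conv(feats)
        att_logits = self.gap(class_maps).flatten(1)  # attention-branch output
        heatmap = self.att_map(class_maps)            # (N, 1, H, W) in [0, 1]
        # Residual attention: emphasize salient regions without zeroing others.
        attended = feats * (1.0 + heatmap)
        logits = self.classifier(self.gap(attended).flatten(1))
        return logits, att_logits, heatmap
```

In training, both `logits` and `att_logits` would typically receive a classification loss, while `heatmap` provides the visual explanation for pathologists.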
We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability of the DarkNet19 CNN model conferred by the attention branch, which also delivered a 3%–4% performance boost over the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist.
The coalesced approach of unifying an attention branch with the CNN model equips pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-the-art performance. The model's proficiency in pinpointing the region of interest is an added benefit that can lead to accurate clinical translation of deep learning models that underpin clinical decision support.
{"title":"An interpretable decision-support model for breast cancer diagnosis using histopathology images","authors":"Sruthi Krishna , S.S. Suganthi , Arnav Bhavsar , Jyotsna Yesodharan , Shivsubramani Krishnamoorthy","doi":"10.1016/j.jpi.2023.100319","DOIUrl":"10.1016/j.jpi.2023.100319","url":null,"abstract":"<div><p>Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for the confirmation of presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading of tissue slides by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting definitive diagnosis of cancer. Convolutional Neural Network (CNN) turned out to be the most adaptable and effective technique in the detection of abnormal pathologic histology. Despite their high sensitivity and predictive power, clinical translation is constrained by a lack of intelligible insights into the prediction. A computer-aided system that can offer a definitive diagnosis and interpretability is therefore highly desirable. Conventional visual explanatory techniques, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision making. The major challenge in CAM is, it cannot be optimized to create the best visualization map. CAM also decreases the performance of the CNN models.</p><p>To address this challenge, we introduce a novel interpretable decision-support model using CNN with a trainable attention mechanism using response-based feed-forward visual explanation. We introduce a variant of DarkNet19 CNN model for the classification of histopathology images. In order to achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with DarkNet19 network forming Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap to identify the region of interest. Finally, the perception branch is constituted using a fully connected layer to classify images.</p><p>We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability of the DarkNet19 CNN model, supervened by the attention branch, besides delivering a 3%–4% performance boost of the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist.</p><p>The coalesced approach of unifying attention branch with the CNN model capacitates pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-art performance. 
The model’s proficiency in pinpointing the region of interest is an added bonus that can lead to accurate clinical translation of deep learning models that underscore clinical decision support.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100319"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10320615/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9806867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Organizational preparedness for the use of large language models in pathology informatics
Pub Date: 2023-01-01 | DOI: 10.1016/j.jpi.2023.100338
Steven N. Hart, Noah G. Hoffman, Peter Gershkovich, Chancey Christenson, David S. McClintock, Lauren J. Miller, Ronald Jackups, Vahid Azimi, Nicholas Spies, Victor Brodsky
In this paper, we consider the current and potential role of the latest generation of Large Language Models (LLMs) in medical informatics, particularly within the realms of clinical and anatomic pathology. We aim to provide a thorough understanding of the considerations that arise when employing LLMs in healthcare settings, such as determining appropriate use cases and evaluating the advantages and limitations of these models.
Furthermore, this paper will consider the infrastructural and organizational requirements necessary for the successful implementation and utilization of LLMs in healthcare environments. We will discuss the importance of addressing education, security, bias, and privacy concerns associated with LLMs in clinical informatics, as well as the need for a robust framework to overcome regulatory, compliance, and legal challenges.
{"title":"Organizational preparedness for the use of large language models in pathology informatics","authors":"Steven N. Hart , Noah G. Hoffman , Peter Gershkovich , Chancey Christenson , David S. McClintock , Lauren J. Miller , Ronald Jackups , Vahid Azimi , Nicholas Spies , Victor Brodsky","doi":"10.1016/j.jpi.2023.100338","DOIUrl":"10.1016/j.jpi.2023.100338","url":null,"abstract":"<div><p>In this paper, we consider the current and potential role of the latest generation of Large Language Models (LLMs) in medical informatics, particularly within the realms of clinical and anatomic pathology. We aim to provide a thorough understanding of the considerations that arise when employing LLMs in healthcare settings, such as determining appropriate use cases and evaluating the advantages and limitations of these models.</p><p>Furthermore, this paper will consider the infrastructural and organizational requirements necessary for the successful implementation and utilization of LLMs in healthcare environments. We will discuss the importance of addressing education, security, bias, and privacy concerns associated with LLMs in clinical informatics, as well as the need for a robust framework to overcome regulatory, compliance, and legal challenges.</p></div>","PeriodicalId":37769,"journal":{"name":"Journal of Pathology Informatics","volume":"14 ","pages":"Article 100338"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10582733/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49683190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}