Head and Neck Tumor Histopathological Image Representation with Pre-Trained Convolutional Neural Network and Vision Transformer
Ranny Rahaningrum Herdiantoputri, D. Komura, Tohru Ikeda, S. Ishikawa
{"title":"Head and Neck Tumor Histopathological Image Representation with Pr with Pre- Trained Conv ained Convolutional Neur olutional Neural Network and Vision al Network and Vision Transformer","authors":"Ranny Rahaningrum Herdiantoputri, D. Komura, Tohru Ikeda, S. Ishikawa","doi":"10.14693/jdi.v30i1.1501","DOIUrl":null,"url":null,"abstract":"Image representation via machine learning is an approach to quantitatively represent histopathological images of head and neck tumors for future applications of artificial intelligence-assisted pathological diagnosis systems. Objective: This study compares image representations produced by a pre-trained convolutional neural network (VGG16) to those produced by a vision transformer (ViT-L/14) in terms of the classification performance of head and neck tumors. Methods: Whole-slide images of five oral tumor categories (n = 319 cases) were analyzed. Image patches were created from manually annotated regions at 4096, 2048, and 1024 pixels and rescaled to 256 pixels. Image representations were classified by logistic regression or multiclass Support Vector Machines for binary or multiclass classifications, respectively. Results: VGG16 with 1024 pixels performed best for benign and malignant salivary gland tumors (BSGT and MSGT) (F1 = 0.703 and 0.803). VGG16 outperformed ViT for BSGT and MSGT with all magnification levels. However, ViT outperformed VGG16 for maxillofacial bone tumors (MBTs), odontogenic cysts (OCs), and odontogenic tumors (OTs) with all magnification levels (F1 = 0.780; 0.874; 0.751). Conclusion: Being more texture-biased, VGG16 performs better in representing BSGT and MSGT","PeriodicalId":53873,"journal":{"name":"Journal of Dentistry Indonesia","volume":" ","pages":""},"PeriodicalIF":0.2000,"publicationDate":"2023-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Dentistry Indonesia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14693/jdi.v30i1.1501","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
引用次数: 0
Abstract
Image representation via machine learning is an approach to quantitatively represent histopathological images of head and neck tumors for future applications of artificial intelligence-assisted pathological diagnosis systems.

Objective: This study compares image representations produced by a pre-trained convolutional neural network (VGG16) with those produced by a vision transformer (ViT-L/14) in terms of head and neck tumor classification performance.

Methods: Whole-slide images of five oral tumor categories (n = 319 cases) were analyzed. Image patches were created from manually annotated regions at 4096, 2048, and 1024 pixels and rescaled to 256 pixels. Image representations were classified by logistic regression for binary classification or by multiclass support vector machines for multiclass classification.

Results: VGG16 with 1024-pixel patches performed best for benign and malignant salivary gland tumors (BSGT and MSGT) (F1 = 0.703 and 0.803). VGG16 outperformed ViT for BSGT and MSGT at all magnification levels. However, ViT outperformed VGG16 for maxillofacial bone tumors (MBTs), odontogenic cysts (OCs), and odontogenic tumors (OTs) at all magnification levels (F1 = 0.780, 0.874, and 0.751, respectively).

Conclusion: Being more texture-biased, VGG16 performs better in representing BSGT and MSGT.
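The workflow described in the abstract (pre-trained features from rescaled patches, followed by a conventional classifier) can be summarized as a compact feature-extraction-plus-classification sketch. The code below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes torchvision's ImageNet-pretrained VGG16 and scikit-learn, covers only the VGG16 branch and the multiclass SVM step (the ViT-L/14 branch, whole-slide annotation, and patch extraction are omitted), and the patch paths and labels in the usage comment are hypothetical placeholders.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG16 from torchvision; drop the final 1000-class layer so the
# 4096-dimensional penultimate activation serves as the image representation.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()
penultimate = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),   # patches rescaled to 256 px, as in the abstract
    transforms.CenterCrop(224),      # VGG16 expects 224x224 input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def vgg16_embedding(patch_path: str) -> np.ndarray:
    """Return a 4096-d VGG16 representation for one image patch."""
    x = preprocess(Image.open(patch_path).convert("RGB")).unsqueeze(0).to(device)
    feats = vgg.avgpool(vgg.features(x)).flatten(1)
    return penultimate(feats).squeeze(0).cpu().numpy()

def multiclass_f1(X: np.ndarray, y: np.ndarray) -> float:
    """Fit a multiclass linear SVM on the representations and report macro F1."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = SVC(kernel="linear").fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")

# Usage (hypothetical patch paths and tumor-category labels, e.g. BSGT/MSGT/MBT/OC/OT):
# X = np.stack([vgg16_embedding(p) for p in patch_paths])
# print("macro F1:", multiclass_f1(X, labels))
```

The linear SVM stands in for the multiclass classifier reported in the study; for the binary comparisons the abstract mentions, scikit-learn's LogisticRegression could be substituted in the same place.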