{"title":"Vision Transformer Based COVID-19 Detection Using Chest CT-scan images","authors":"P. Sahoo, S. Saha, S. Mondal, Suraj Gowda","doi":"10.1109/BHI56158.2022.9926823","DOIUrl":null,"url":null,"abstract":"The fast proliferation of the coronavirus around the globe has put several countries' healthcare systems in danger of collapsing. As a result, locating and separating COVID-19-positive patients is a critical task. Deep Learning approaches were used in several computer-aided automated systems that utilized chest computed tomography (CT-scan) or X-ray images to create diagnostic tools. However, current Convolutional Neural Network (CNN) based approaches cannot capture the global context because of inherent image-specific inductive bias. These techniques also require large and labeled datasets to train the algorithm, but not many labeled COVID-19 datasets exist publicly. To mitigate the problem, we have developed a self-attention-based Vision Transformer (ViT) architecture using CT-scan. The proposed ViT model achieves an accuracy of 98.39% on the popular SARS-CoV-2 datasets, outperforming the existing state-of-the-art CNN-based models by 1%. We also provide the characteristics of CT scan images of the COVID-19-affected patients and an error analysis of the model's outcome. Our findings show that the proposed ViT-based model can be an alternative option for medical professionals for effective COVID-19 screening. The implementation details of the proposed model can be accessed at https://github.com/Pranabiitp/ViT.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BHI56158.2022.9926823","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
The rapid spread of the coronavirus around the globe has pushed several countries' healthcare systems to the brink of collapse. Identifying and isolating COVID-19-positive patients is therefore a critical task. Deep learning approaches have been used in several computer-aided automated systems that build diagnostic tools from chest computed tomography (CT-scan) or X-ray images. However, current Convolutional Neural Network (CNN) based approaches cannot capture the global context of an image because of their inherent image-specific inductive biases. These techniques also require large labeled datasets for training, but few labeled COVID-19 datasets are publicly available. To mitigate these problems, we have developed a self-attention-based Vision Transformer (ViT) architecture trained on CT-scan images. The proposed ViT model achieves an accuracy of 98.39% on the popular SARS-CoV-2 dataset, outperforming the existing state-of-the-art CNN-based models by 1%. We also describe the characteristics of CT-scan images of COVID-19-affected patients and provide an error analysis of the model's outcomes. Our findings show that the proposed ViT-based model can serve as an alternative option for medical professionals for effective COVID-19 screening. The implementation details of the proposed model can be accessed at https://github.com/Pranabiitp/ViT.
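For illustration, the sketch below shows what a patch-based ViT classifier of this kind can look like in PyTorch. It is not the authors' implementation (that is available at the GitHub link above); the patch size, embedding width, depth, head count, and class names are illustrative assumptions. The key property the abstract relies on is visible in the forward pass: every encoder layer applies self-attention over all patch tokens at once, so the model aggregates global context rather than the local receptive fields of a CNN.

import torch
import torch.nn as nn

class ViTForCTScreening(nn.Module):
    """Minimal ViT sketch for binary CT-scan screening (illustrative only)."""

    def __init__(self, image_size=224, patch_size=16, embed_dim=384,
                 depth=6, num_heads=6, num_classes=2):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Split each CT slice into non-overlapping patches and embed them.
        # Grayscale CT slices are commonly replicated to 3 channels.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.patch_embed(x)              # (B, D, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)     # (B, N, D) patch sequence
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                  # global self-attention over all patches
        return self.head(x[:, 0])            # classify from the [CLS] token

model = ViTForCTScreening()
logits = model(torch.randn(1, 3, 224, 224))  # e.g., COVID vs. non-COVID scores

In practice one would initialize such a model from ImageNet-pretrained ViT weights rather than training from scratch, since, as the abstract notes, labeled COVID-19 CT data is scarce.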