{"title":"Early Esophageal Malignancy Detection Using Deep Transfer Learning and Explainable AI","authors":"Priti Shaw, Suresh Sankaranarayanan, P. Lorenz","doi":"10.1109/ICCIS56375.2022.9998162","DOIUrl":null,"url":null,"abstract":"Esophageal malignancy is a rare form of cancer that starts in the esophagus and spreads to the other parts of the body, impacting a severe risk on the liver, lungs, lymph nodes, and stomach. Studies have shown that esophageal cancer is one of the most prevalent causes of cancer mortality. In 2020, 604100 individuals have been diagnosed with this deadly disease. There are a good number of medical studies, carried out on this topic, every year. A similar focus is also imparted on the AI-based deep learning models for the classification of malignancy. But the challenge is that the AI models are all complex and lack transparency. There is no available information to explain the opacity of such models. And as AI-based medical research seeks reliability, it becomes very important to bring in explainability. So we, through this research, have used Explainable AI(XAI) entitled LIME for creating trust-based models for the early detection of esophageal malignancy. We have used a simple CNN model and several transfer learning-based models, for this study. We have taken the actual endoscopic images from the Kvasir-v2 dataset resulting in an accuracy of 88.75%. with the DenseNet-201 model followed by the usage of an Explainable AI model, Lime, for giving an explanation for the images classified. The deep learning model, combined with explainable AI, helps in getting a clear picture of the regions contributing toward the malignancy prediction and promotes confidence in the model, without the intervention of a domain expert.","PeriodicalId":398546,"journal":{"name":"2022 6th International Conference on Communication and Information Systems (ICCIS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 6th International Conference on Communication and Information Systems (ICCIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCIS56375.2022.9998162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Esophageal malignancy is a rare form of cancer that starts in the esophagus and spreads to other parts of the body, posing a severe risk to the liver, lungs, lymph nodes, and stomach. Studies have shown that esophageal cancer is one of the most prevalent causes of cancer mortality; in 2020, 604,100 individuals were diagnosed with this deadly disease. A large number of medical studies are carried out on this topic every year, and similar attention has been devoted to AI-based deep learning models for malignancy classification. The challenge, however, is that these AI models are complex and lack transparency, offering no information that explains how their predictions are made. Since AI-based medical research demands reliability, it becomes very important to bring in explainability. In this research, we therefore use the Explainable AI (XAI) technique LIME to create trust-based models for the early detection of esophageal malignancy. We use a simple CNN model and several transfer-learning-based models for this study. Using real endoscopic images from the Kvasir-v2 dataset, the DenseNet-201 model achieves an accuracy of 88.75%, and the Explainable AI method LIME is then applied to explain the classified images. The deep learning model, combined with explainable AI, gives a clear picture of the image regions contributing to the malignancy prediction and promotes confidence in the model without the intervention of a domain expert.
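The abstract describes a two-stage approach: transfer learning with a pretrained DenseNet-201 backbone on Kvasir-v2 endoscopic images, followed by LIME explanations of individual predictions. The sketch below is not the authors' code; it is a minimal illustration of that pipeline under assumed settings (directory layout `kvasir-v2/<class>/<image>.jpg`, 224x224 inputs, 8 Kvasir-v2 classes, training hyperparameters), using TensorFlow/Keras and the `lime` package.

```python
# Hedged sketch (not the paper's implementation): DenseNet-201 transfer learning
# on Kvasir-v2-style endoscopic images, then a LIME explanation of one prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input
from lime import lime_image
from skimage.segmentation import mark_boundaries

IMG_SIZE = (224, 224)   # assumed input resolution
NUM_CLASSES = 8         # Kvasir-v2 contains 8 endoscopic classes (incl. esophagitis)

# --- Transfer learning: frozen DenseNet-201 backbone + small classification head ---
base = DenseNet201(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep pretrained convolutional features fixed

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = layers.Lambda(preprocess_input)(inputs)          # ImageNet-style normalization inside the model
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Assumed directory layout: kvasir-v2/<class_name>/<image>.jpg, raw [0, 255] pixels
train_ds = tf.keras.utils.image_dataset_from_directory(
    "kvasir-v2", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "kvasir-v2", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="categorical")
model.fit(train_ds, validation_data=val_ds, epochs=10)

# --- LIME: highlight the image regions that drive a single prediction ---
def predict_fn(images):
    # LIME passes batches of perturbed images; the model handles its own preprocessing.
    return model.predict(np.asarray(images, dtype=np.float32), verbose=0)

for images, _ in val_ds.take(1):
    sample = images[0].numpy()                       # one raw validation image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    sample.astype("double"), predict_fn, top_labels=1, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)         # superpixels supporting the predicted class
```

The overlay marks the superpixels LIME identifies as most supportive of the predicted class, which is the kind of region-level explanation the abstract refers to when it says the model's decision can be inspected without a domain expert.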