{"title":"Deep Learning and Explainable Machine Learning on Hair Disease Detection","authors":"W. Heng, N. A. Abdul-Kadir","doi":"10.1109/ECBIOS57802.2023.10218472","DOIUrl":null,"url":null,"abstract":"Deep learning algorithms have been widely used for various healthcare research because it helps eliminate the need for manual feature extraction which requires specialist expertise and is time-consuming. However, deep learning models have low interpretability in their classification results and hence low trust and practical usage in clinical settings. To overcome this reliability issue, explainable machine learning (XAI) can be used to understand the effect of the different networks and the extracted features on the classification results. In this study, multiple convolutional neural networks were trained and tested on hairy scalp images for the detection of hair diseases. In addition to standard performance metrics including accuracy, sensitivity, and specificity, we further investigated the models' interpretability using three XAI techniques including Local Interpretable Model-Agnostic Explanations, Gradient-weighted Class Activation Mapping, and occlusion sensitivity. The result of using XAI techniques revealed that the model's high classification accuracy did not necessarily coincide with its applicability or practicality. The application of XAI techniques in this study provided valuable insights into the contributions made by different groups of pixels to the model's decision-making process. This method helped identify potential model biases, which could then be utilized to facilitate informed adjustments for the improvement of the model's robustness.","PeriodicalId":334600,"journal":{"name":"2023 IEEE 5th Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 5th Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECBIOS57802.2023.10218472","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep learning algorithms have been widely used in healthcare research because they eliminate the need for manual feature extraction, which requires specialist expertise and is time-consuming. However, deep learning models offer little interpretability of their classification results and therefore gain little trust and see limited practical use in clinical settings. To overcome this reliability issue, explainable machine learning (XAI) can be used to understand the effect of different networks and the extracted features on the classification results. In this study, multiple convolutional neural networks were trained and tested on hairy scalp images for the detection of hair diseases. In addition to standard performance metrics, including accuracy, sensitivity, and specificity, we further investigated the models' interpretability using three XAI techniques: Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and occlusion sensitivity. The results of applying these XAI techniques revealed that a model's high classification accuracy did not necessarily coincide with its applicability or practicality. The XAI analysis provided valuable insights into the contributions made by different groups of pixels to each model's decision-making process, helping to identify potential model biases that can then inform adjustments to improve the models' robustness.
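To make the third technique concrete, the sketch below computes an occlusion-sensitivity map for an image classifier: a patch is slid over the input and the drop in the target-class probability is recorded at each position. This is a minimal illustration, not the authors' implementation; the ResNet-18 backbone, 16-pixel patch, 8-pixel stride, and gray fill value are assumptions chosen for brevity, and the paper's own CNNs and scalp-image preprocessing would be substituted in practice.

```python
# Minimal occlusion-sensitivity sketch (hypothetical setup, not the
# authors' code). A pretrained torchvision ResNet-18 stands in for the
# paper's hair-disease classifiers.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def occlusion_sensitivity(model, image, target_class,
                          patch=16, stride=8, fill=0.5):
    """Return the baseline probability and a heatmap whose cells hold the
    target-class probability when the corresponding region is occluded;
    cells far below the baseline mark influential pixel groups."""
    _, _, H, W = image.shape
    with torch.no_grad():
        base = F.softmax(model(image), dim=1)[0, target_class].item()
    ys = list(range(0, H - patch + 1, stride))
    xs = list(range(0, W - patch + 1, stride))
    heat = torch.zeros(len(ys), len(xs))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            occluded = image.clone()
            # Replace one patch with a flat gray value and re-classify.
            occluded[:, :, y:y + patch, x:x + patch] = fill
            with torch.no_grad():
                prob = F.softmax(model(occluded), dim=1)[0, target_class]
            heat[i, j] = prob.item()
    return base, heat

# Usage with a random stand-in image (a real scalp image would be
# normalized the same way as the training data).
img = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    target = model(img).argmax(dim=1).item()
base_prob, heatmap = occlusion_sensitivity(model, img, target)
print(f"baseline p={base_prob:.3f}, heatmap shape={tuple(heatmap.shape)}")
```

Regions where the probability falls far below the baseline are the pixel groups driving the prediction, which is the kind of signal the study uses to surface potential model biases.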