{"title":"Adaptive Spotted Hyena Optimizer-enabled Deep QNN for Laryngeal Cancer Classification","authors":"M. N. Sachane, S. A. Patil","doi":"10.1109/ICECAA55415.2022.9936500","DOIUrl":null,"url":null,"abstract":"Laryngeal Cancer (LCA) is one of the predominant cancers found commonly among people around world that affects the head and neck region of humans. The change in patient’s voice is the early symptom of LCA and diagnosis the LCA at the primary stages is necessary to decrease the morbidity rate. Usage of laryngeal endoscopic images for automatic laryngeal cancer detection is advantageous in additional evaluation of the tumor structures and its characteristics enable to improve the quality of treatment, like computed aided surgery. Though, only fewer methods exist for detecting laryngeal cancer automatically, but increasing the performance still results a major challenge. In order to detect the laryngeal cancer automatically, this research proposes an effectual model for laryngeal cancer classification using proposed Adaptive Spotted Hyena Optimizer-based Deep Quantum Neural Network (ASHO-based Deep QNN). Here, the pre-processing is effectively done using Gaussian filtering and features, such as Spider Local Image Feature (SLIF), Gradient Binary Pattern (GBP), and Histogram of Gradients (HOG) are refined efficiently to enhance the performance of the model. Finally, classification is accomplished with the Deep QNN, wherein the introduced ASHO is made use of to tune the network classifier. The ASHO is devised by inheriting the benefits of Adaptive concept with Spotted Hyena Optimizer (SHO). Meanwhile, the proposed ASHO-based Deep QNN has achieved maximum values of accuracy, sensitivity, as well as specificity at 0.948, 0.952, and 0.924, respectively.","PeriodicalId":273850,"journal":{"name":"2022 International Conference on Edge Computing and Applications (ICECAA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Edge Computing and Applications (ICECAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICECAA55415.2022.9936500","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Laryngeal Cancer (LCA) is one of the most prevalent cancers worldwide, affecting the head and neck region. A change in the patient's voice is an early symptom of LCA, and diagnosing the disease at an early stage is necessary to reduce morbidity. Using laryngeal endoscopic images for automatic laryngeal cancer detection allows additional evaluation of tumor structures and their characteristics, which can improve the quality of treatment, for example in computer-aided surgery. However, only a few methods exist for detecting laryngeal cancer automatically, and improving their performance remains a major challenge. To detect laryngeal cancer automatically, this research proposes an effective model for laryngeal cancer classification using the proposed Adaptive Spotted Hyena Optimizer-based Deep Quantum Neural Network (ASHO-based Deep QNN). Here, pre-processing is performed using Gaussian filtering, and features such as the Spider Local Image Feature (SLIF), Gradient Binary Pattern (GBP), and Histogram of Oriented Gradients (HOG) are extracted to enhance the performance of the model. Finally, classification is accomplished with the Deep QNN, where the introduced ASHO is used to tune the network classifier. The ASHO is devised by combining the adaptive concept with the Spotted Hyena Optimizer (SHO). The proposed ASHO-based Deep QNN achieved maximum accuracy, sensitivity, and specificity of 0.948, 0.952, and 0.924, respectively.
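To make the abstract's pipeline concrete, the following is a minimal Python sketch of two of its stages: Gaussian-filter pre-processing with feature extraction, and a single position update of the base Spotted Hyena Optimizer (Dhiman and Kumar, 2017). Everything here is an assumption except the stage names: all parameter values are illustrative, a uniform Local Binary Pattern histogram stands in for SLIF/GBP (which have no standard-library implementations), and the paper's Deep QNN and its "adaptive" SHO modification are not specified in the abstract, so they are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import hog, local_binary_pattern


def extract_features(image: np.ndarray) -> np.ndarray:
    """Pre-process a grayscale endoscopic frame and build a feature vector."""
    # Stage 1: Gaussian filtering to suppress noise (sigma is an assumption).
    smoothed = gaussian_filter(image.astype(np.float64), sigma=1.0)

    # Stage 2a: Histogram of Oriented Gradients, as named in the abstract.
    hog_vec = hog(smoothed, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)

    # Stage 2b: texture descriptor. The paper uses SLIF and GBP; a uniform
    # LBP histogram is used here purely as an illustrative stand-in.
    lbp = local_binary_pattern(smoothed, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([hog_vec, lbp_hist])


def sho_step(positions: np.ndarray, fitness: np.ndarray,
             h: float, rng: np.random.Generator) -> np.ndarray:
    """One encircling/attacking update of the base SHO.

    positions: (N, d) candidate solutions (e.g., classifier weights);
    fitness: (N,) objective values (lower is better);
    h: control parameter, linearly decreased from 5 to 0 over iterations.
    The paper's adaptive modification of SHO is not detailed in the
    abstract and is therefore omitted.
    """
    best = positions[np.argmin(fitness)]          # prey = best solution found
    new_positions = np.empty_like(positions)
    for i, p in enumerate(positions):
        B = 2.0 * rng.random(p.shape)             # swirl factor
        E = 2.0 * h * rng.random(p.shape) - h     # convergence factor
        D = np.abs(B * best - p)                  # distance to prey
        new_positions[i] = best - E * D           # move toward the prey
    return new_positions


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                    # stand-in endoscopic frame
    feats = extract_features(img)
    print("feature vector length:", feats.shape[0])

    pop = rng.random((10, 5))                     # 10 candidates, 5 parameters
    fit = (pop ** 2).sum(axis=1)                  # toy sphere objective
    pop = sho_step(pop, fit, h=5.0, rng=rng)
```

In the paper's setting the optimizer would be run for many iterations, with the fitness function measuring Deep QNN classification error on the extracted features; the toy objective above only demonstrates the update mechanics.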