Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865757
R. Reyes, Rovenson V. Sevilla, Godofredo S. Zapanta, Jovencio V. Merin, R. R. Maaliw, Al Ferrer Santiago
This study provides a practical solution to the problem of detecting safety-gear compliance in construction. This is imperative given that safety in the construction work environment is one of the greatest global concerns, and advances in deep learning algorithms, particularly in machine learning and database management, make it possible to address this challenge. The study developed a framework to recognize construction personnel's compliance with personal protective equipment (PPE) requirements, designed to be integrated into an organization's operational procedures. The convolutional neural network model was constructed by applying transfer learning to a base version of the YOLOv3 deep learning model. On the test data, the detection method achieved an F1 score of 0.9299, with a mean precision-recall rate of 92.99%. The purpose of this study is to demonstrate the viability and applicability of machine-vision-based methods for automated safety-compliance processes on construction sites.
Title: "Safety Gear Compliance Detection Using Data Augmentation-Assisted Transfer Learning in Construction Work Environment" (2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT))
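The reported numbers are internally consistent: F1 is the harmonic mean of precision and recall, so when both are 92.99%, the F1 score is likewise 0.9299. A minimal sketch of that check:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# With precision = recall = 0.9299 (the paper's mean rate of 92.99%),
# the harmonic mean collapses to the common value.
print(round(f1_score(0.9299, 0.9299), 4))  # -> 0.9299
```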
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865764
Shobhan Banerjee, B. B. Dash, M. Rath, Tanmaya Swain, Tapaswini Samant
The malware classification task launched by Microsoft has been quite popular for the past half-decade. After malware is detected, classification plays an important role, because the action to be taken depends on the type of malware. Feature extraction is vital to proceed with modeling. The data consists of two separate files for each malware sample, from which we generate features, select the most important ones, and train a classical ensemble learning model. Various solutions have been proposed for this task; we have made modifications over them to achieve better accuracy. We use features generated with a bigram Bag of Words (BOW) together with pixel-intensity features. Since the dataset is quite large, we propose a multiprocessing-based approach: instead of processing the data serially, we process it in parallel across all available CPU cores, optimizing computation time as much as possible.
Title: "Malware Classification using Bigram BOW, Pixel Intensity Features, and Multiprocessing"
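The parallel feature-extraction idea can be sketched with a toy bigram counter. The actual pipeline works on the two dump files Microsoft provides per sample; the token lists and worker count below are illustrative placeholders, and a thread pool stands in for the per-core workers the paper describes:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def bigram_bow(tokens):
    """Count adjacent token pairs (bigram bag-of-words) for one file."""
    return Counter(zip(tokens, tokens[1:]))

def parallel_bigrams(files, workers=4):
    """Build one bigram count per file in parallel, then merge them."""
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for counts in pool.map(bigram_bow, files):
            total.update(counts)
    return total

# Hypothetical byte streams standing in for the per-sample dumps.
files = [["00", "1F", "00", "1F"], ["1F", "00", "1F", "8B"]]
counts = parallel_bigrams(files)
print(counts[("00", "1F")])  # -> 3
```

The top-scoring bigrams would then feed the ensemble model as feature columns.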
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865107
P. P, Sarvamangala D R
Abundant information on any topic is available from various resources, including the World Wide Web, news articles, books, e-books, and blogs. A knowledge seeker may spend days assimilating the required content from the web, and much of the content available across resources is repetitive. Time constraints therefore play a major part in assimilating content. Kannada is a regional language spoken in southern India, with various dialects based on geographic location. The time needed to read and understand Kannada text depends on the user's language experience; for many users it is highly challenging and time-consuming. A tool that automatically reads Kannada text content from various sources and summarizes it is therefore the need of the day. The proposed model aims to assist readers by summarizing a given Kannada document; automatic Kannada text summarization enables users to assimilate the required information from e-resources in the shortest possible time. The project aims to build a natural language processing tool that automatically reads Kannada text from any e-resource and summarizes it.
Title: "Kannada Text Summarization using Extractive Technique"
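A word-frequency sentence scorer is one minimal form of the extractive technique the title names; the scorer is language-agnostic, so it applies equally to whitespace-tokenized Kannada text. The sentences below are illustrative placeholders, not the paper's pipeline:

```python
from collections import Counter

def extractive_summary(sentences, k=1):
    """Score sentences by summed word frequency; keep the top-k in order."""
    words = [w for s in sentences for w in s.split()]
    freq = Counter(words)
    ranked = sorted(range(len(sentences)),
                    key=lambda i: sum(freq[w] for w in sentences[i].split()),
                    reverse=True)
    keep = sorted(ranked[:k])  # restore original document order
    return [sentences[i] for i in keep]

sents = ["the model reads text", "the model summarizes the text", "birds fly"]
print(extractive_summary(sents, k=1))  # -> ['the model summarizes the text']
```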
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865733
Naresh Pajjuri, U. Kumar, Rahisha Thottolil
The widespread prevalence of diseases in agricultural crops affects both the quality and quantity of agricultural production at local to regional scales. More often than not, the diseases remain unidentified, causing great distress to farmers while threatening national food security. To circumvent this problem, early diagnosis of diseases using a fast and reliable method is beneficial. Plant disease identification from images captured by digital cameras is an area of active research. The use of various machine learning algorithms for plant disease classification, and the evolution of deep convolutional neural network (CNN) based architectures, have further improved classification accuracy. In this context, an automated computer-vision-based scheme for detecting and classifying plant diseases from photographs of plants and leaves is highly desirable. Although a few techniques are currently used in an ad hoc fashion for plant disease detection and/or classification, a systematic study evaluating their usage and efficacy on actual plant data has largely remained unexplored. The aim of this paper is to evaluate state-of-the-art CNN-based transfer learning architectures, namely GoogLeNet, AlexNet, VGG16, and ResNet50V2, for plant disease detection and classification. The models were tested on three popular, publicly available plant disease benchmark datasets: the PlantVillage Dataset, the New Plant Disease Dataset, and the Plant Pathology Dataset. Validation metrics including precision, recall, F1 score, and overall accuracy were used to evaluate the experiments, which revealed that VGG16 achieved the highest accuracies of 96.6%, 98.5%, and 89% on the three datasets, respectively, outperforming all other state-of-the-art models.
Title: "Comparative Evaluation of the Convolutional Neural Network based Transfer Learning Models for Classification of Plant Disease"
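The model-comparison step — score each candidate architecture on held-out data and keep the best — can be sketched as a small selection harness. The prediction lists here are hypothetical stand-ins for real model outputs, not results from the paper:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def best_model(results):
    """results: {model_name: (y_true, y_pred)} -> name with highest accuracy."""
    return max(results, key=lambda name: accuracy(*results[name]))

# Hypothetical predictions on a tiny validation split.
results = {
    "VGG16":      ([0, 1, 1, 0], [0, 1, 1, 0]),
    "ResNet50V2": ([0, 1, 1, 0], [0, 1, 0, 0]),
}
print(best_model(results))  # -> VGG16
```

In practice the same harness would compare precision, recall, and F1 as well, since accuracy alone can mislead on imbalanced disease classes.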
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865849
Sumit Dhariwal, Avani Sharma
Sugarcane production is in increasing demand due to interest from the sugar and alcohol industries, bioethanol and biomass production, and other manufacturing sectors. Constant scientific and technological advances have optimized agricultural activities and maximized the productivity of sugarcane crops. In this context, digital image processing, computer vision techniques, and machine learning algorithms have supported the automation of processes that were previously performed manually and at high cost. In this study, we present a novel method to detect crop rows and measure gaps in crop fields. Our method is also robust to curved crop rows, a real problem that substantially limits many solutions in practical applications. The proposed method is evaluated on a database of real scene images prepared with the support of a small unmanned aerial vehicle (UAV). Experimental tests showed a low relative error of approximately 1.65% compared to manual mapping of the planting regions, even for regions with failures in the curved crop rows. This means our method can identify and measure crop rows accurately, enabling automated inspections with high-precision measurements.
Title: "Aerial Images were used to Detect Curved-Crop Rows and Failures in Sugarcane Production"
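The row-and-gap measurement can be illustrated on already-detected plant centers: fit a line through the centers of one row, then flag inter-plant distances larger than the nominal spacing as planting failures. The coordinates and spacing below are illustrative; the paper's actual pipeline operates on UAV imagery:

```python
def fit_row(points):
    """Least-squares line y = m*x + b through detected plant centers."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def gap_lengths(xs, spacing):
    """Distances between consecutive plants that exceed the nominal spacing."""
    return [b - a for a, b in zip(xs, xs[1:]) if b - a > spacing]

pts = [(0, 0.1), (1, 1.0), (2, 2.1), (5, 5.0)]  # plant centers along one row
m, b = fit_row(pts)
print(round(m, 2))                                  # row direction (slope)
print(gap_lengths([x for x, _ in pts], spacing=1.5))  # -> [3]
```

Curved rows would be handled the same way piecewise, fitting short segments instead of one global line.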
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865705
S. Atheeth, M. Arora
The paper reports the simulation of a piezoelectric disc and its extension to understanding piezoelectric micromachined ultrasound transducer (pMUT) operation. The theoretical formulation matches the simulation results for the piezo disc and the pMUT at the fundamental vibration mode to within 5%. Stress formation in the layers, central displacement, and pressure output at a 3 mm distance, for both PZT and AlN (aluminium nitride) as the piezo layer in the pMUT, are captured via the simulation model and compared.
Title: "Simulation of Piezoelectric Disc & Micromachined Ultrasound Transducer"
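For context, the fundamental resonance of a clamped circular membrane is the standard first approximation for a pMUT diaphragm (a textbook relation, not a formula taken from this paper):

```latex
f_{0} \approx \frac{10.21}{2\pi a^{2}} \sqrt{\frac{D}{\rho h}},
\qquad D = \frac{E h^{3}}{12\,(1 - \nu^{2})}
```

where $a$ is the membrane radius, $h$ its thickness, $\rho$ its density, $E$ Young's modulus, and $\nu$ Poisson's ratio; 10.21 is the eigenvalue for a clamped boundary. Simulated fundamental modes are typically validated against this expression.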
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865706
Potsangbam Sushila Devi, Sunita Sarkar, Takhellambam Sonamani Singh, Laimayum Dayal Sharma, Chongtham Pankaj, Khoirom Rajib Singh
State-of-the-art models for language translation, conversion of handwritten to digital text, and transcription have succeeded in a wide range of fields using Natural Language Processing, Artificial Intelligence, and Machine Learning (AIML) applications. At present, the evaluation of subjective answers is not exercised systematically or graded by computer. In this work, a mathematical method is proposed for evaluating subjective answers using Bidirectional Encoder Representations from Transformers (BERT) for word embedding, converting each sentence into a vector space using a pooling method so that similar sentences are represented similarly. The proposed method evaluates subjective answers by their semantic meaning on a dataset of Engineering- and Medical-related questions and answers, and is able to recognize the similarity of different answers that are semantically the same. The BERT model is used with machine learning methods to transform sentences into a vector space, which is then used to calculate a percentage similarity. The similarity of the sentences, expressed as a percentage, is observed and evaluated.
Title: "An Approach to Evaluating Subjective Answers using BERT model"
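The pooling-then-similarity step can be sketched in a few lines: average the per-word vectors into one sentence vector, then score a student answer against the reference by cosine similarity expressed as a percentage. The 2-d vectors below are toy stand-ins for real BERT embeddings:

```python
import math

def mean_pool(word_vectors):
    """Average word embeddings into one sentence vector (the pooling step)."""
    dim = len(word_vectors[0])
    return [sum(v[i] for v in word_vectors) / len(word_vectors)
            for i in range(dim)]

def cosine_pct(u, v):
    """Cosine similarity between two vectors, expressed as a percentage."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 100.0 * dot / (nu * nv)

# Toy 2-d "embeddings" standing in for BERT outputs.
answer = mean_pool([[1.0, 0.0], [0.0, 1.0]])  # -> [0.5, 0.5]
key = mean_pool([[2.0, 0.0], [0.0, 2.0]])     # -> [1.0, 1.0]
print(round(cosine_pct(answer, key), 1))      # -> 100.0
```

Cosine similarity ignores vector magnitude, which is why two differently worded but semantically aligned answers can still score near 100%.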
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865807
Ena Jain, D. Acharya
Most Indian cities have seen rapid urbanization due to huge migration of population, leading to a substantial rise in construction activities, vehicular emissions, and uncontrolled growth. Some such cities also house many pollution-causing industries that degrade air quality. These cities have pollution hotspots where pollution levels are much higher than permitted limits. Air pollution is highly location-centric and varies greatly on moving away from the hotspots. Because Air Quality Index (AQI) data for these locations are typically unavailable, the long-term impact of these hotspots on adjacent neighborhoods is unknown. If the variation in pollution in adjacent neighborhoods, as one moves away from a hotspot, can be modeled and projected, this information will be extremely beneficial to governments and city administrations for better planning of development activities, as well as for issuing suitable recommendations to sensitive establishments such as educational institutes, hospitals, and old age homes. In this work, we collected real-time AQI data at a hotspot and its neighborhoods along a specific route over a period of time, and developed a mathematical model that forecasts the variation of AQI with distance.
Title: "Mobile Sensing and Modeling Air Pollution Hotspots in Urban Neighborhoods"
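One plausible form for such a model (an assumption for illustration, not the paper's fitted equation) is exponential decay of AQI with distance from the hotspot, which can be fitted by ordinary least squares on log-AQI:

```python
import math

def fit_decay(dist, aqi):
    """Fit AQI(d) = A * exp(-k * d) by least squares on log(AQI)."""
    ys = [math.log(a) for a in aqi]
    n = len(dist)
    sx = sum(dist)
    sy = sum(ys)
    sxx = sum(x * x for x in dist)
    sxy = sum(x * y for x, y in zip(dist, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = math.exp((sy - slope * sx) / n)
    return A, -slope  # A = hotspot AQI, k = decay rate per km

# Synthetic readings: AQI 300 at the hotspot, decaying at 0.35 per km.
dist = [0.0, 1.0, 2.0, 3.0]
aqi = [300.0 * math.exp(-0.35 * d) for d in dist]
A, k = fit_decay(dist, aqi)
print(round(A), round(k, 2))  # -> 300 0.35
```

With real mobile-sensing data the residuals of this fit would show whether a simple decay law holds or whether terrain and traffic demand a richer model.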
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865813
Swetha Kulkarni, S. Desai, Nirmala S. Patil, V. Baligar, M. M, N. R
Lung cancer is one of the most perilous diseases, caused by various factors, with smoking being the common factor across the globe. Early detection is best for treating any type of cancer, and this is very much true for lung cancer. However, in the Indian scenario, a patient approaching medical diagnosis at an early stage is quite rare; by the time the first screening is done, the cancer has often grown to Grade 2 or higher. Smoking and consuming tobacco products, as well as exposure to second-hand smoke, are said to be the major causes of lung cancer. Classifying a given X-ray as cancerous or non-cancerous is a challenging problem. Most of the literature reported so far has explored deep neural network models for binary classification of chest X-ray images into cancerous and non-cancerous. However, chest X-rays are observed to have poor contrast in some cases, and enhancing this contrast prior to training can improve model accuracy. Hence, in this paper we present a novel gamma-correction-based CNN model for chest X-ray image classification. The proposed model achieves the highest accuracy, 0.92, performing slightly better than other recently reported models.
Title: "Contrast Enhancement based CNN model for Lung Cancer Classification and Prediction using Chest X-ray Images"
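The gamma-correction preprocessing step has a standard power-law form; a sketch on raw 8-bit pixel values (the gamma value here is illustrative, not the paper's tuned setting):

```python
def gamma_correct(pixels, gamma):
    """Power-law contrast adjustment for 8-bit pixel values."""
    return [round(255 * (p / 255) ** (1.0 / gamma)) for p in pixels]

# gamma > 1 brightens midtones, lifting detail in dark X-ray regions
# before the images are fed to the CNN.
row = [0, 64, 128, 255]
print(gamma_correct(row, 2.2))  # -> [0, 136, 186, 255]
```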
Pub Date: 2022-07-08 | DOI: 10.1109/CONECCT55679.2022.9865816
K. Srikanth, T. K. Ramesh, Suja Palaniswamy, Ranganathan Srinivasan
Artificial intelligence (AI) is used in decision support systems, which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black-box nature, accuracy, precision, and recall are insufficient metrics for evaluating a model's performance; domain knowledge is also essential to identify the features the model considers significant in arriving at its decision. In this paper, we consider a face mask recognition use case to explain the application and benefits of explainable AI (XAI). Eight models used to solve the face mask recognition problem were selected, and Grad-CAM is used to explain these state-of-the-art models. Models that selected incorrect features were eliminated even though they had high accuracy. Domain knowledge relevant to face mask recognition, namely facial feature importance, is applied to identify the model that picked the most appropriate features in arriving at its decision. We demonstrate that models with high accuracy do not necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models, with a guarantee that the models are looking at the right features when arriving at a classification. Furthermore, the model's outcomes can be explained to the user, enhancing their confidence in the AI model deployed in the field.
Title: "XAI based model evaluation by applying domain knowledge"
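Grad-CAM itself reduces to a small computation once the convolutional activations and class-score gradients are in hand: weight each feature map by its global-average-pooled gradient, sum the weighted maps, and apply ReLU. The 2×2 maps below are toy inputs standing in for a real network's tensors:

```python
def grad_cam(activations, gradients):
    """Grad-CAM heatmap: alpha-weighted sum of activation maps, then ReLU."""
    h, w = len(activations[0]), len(activations[0][0])
    heat = [[0.0] * w for _ in range(h)]
    for A, G in zip(activations, gradients):
        alpha = sum(sum(row) for row in G) / (h * w)  # global-average-pooled grad
        for i in range(h):
            for j in range(w):
                heat[i][j] += alpha * A[i][j]
    return [[max(0.0, v) for v in row] for row in heat]  # ReLU keeps positives

# Two tiny 2x2 activation maps; only the first influences the class score.
acts = [[[1.0, 0.0], [0.0, 2.0]], [[5.0, 5.0], [5.0, 5.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]]]
print(grad_cam(acts, grads))  # -> [[1.0, 0.0], [0.0, 2.0]]
```

Overlaying such a heatmap on the input image is what lets a reviewer check whether a mask classifier attends to the mouth-and-nose region rather than the background.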