
Latest Publications: 2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)

Explaining deep-learning models using gradient-based localization for reliable tea-leaves classifications
Puja Banerjee, Susmita Banerjee, R. P. Barnwal
In deep learning solutions there has been a lot of ambiguity about how to make explainability an integral part of a machine learning pipeline. Recently, several deep learning techniques have been introduced to solve increasingly complicated problems with higher predictive capacity. However, this predictive power comes at the cost of high computational complexity and poor interpretability. While these models often produce very accurate predictions, we need to be able to explain the path such models follow to reach a decision. Deep learning models, in general, make predictions with few or no interpretable explanations, and this lack of explainability makes them black boxes. Explainable Artificial Intelligence (XAI) aims at transforming this black-box approach into a more interpretable one. In this paper, we apply the well-known Grad-CAM technique to explain a tea-leaf classification problem. The proposed method classifies tea-leaf-bud combinations using pre-trained deep learning models. We add classification explainability on our tea-leaf dataset by feeding the pre-trained model to the Grad-CAM technique to produce class-specific heatmaps. We analyzed the results and the working of the classification models for their reliability and effectiveness.
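As an illustration of the general recipe (not the authors' exact code), the sketch below attaches Grad-CAM to a pre-trained Keras classifier and returns a class-specific heatmap; the ResNet50 backbone, the `conv5_block3_out` layer name, and the random placeholder image are assumptions standing in for the fine-tuned tea-leaf model.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def grad_cam_heatmap(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (H, W, 3)."""
    # Model mapping the input to (last conv feature maps, class predictions).
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel-importance weights: global-average-pool the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, ReLU, normalise to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + keras.backend.epsilon())
    return cam.numpy()

# ImageNet-pretrained backbone standing in for the tea-leaf classifier;
# the layer name below is specific to Keras' ResNet50.
model = keras.applications.ResNet50(weights="imagenet")
img = np.random.rand(224, 224, 3).astype("float32")  # placeholder image
heatmap = grad_cam_heatmap(
    model,
    keras.applications.resnet50.preprocess_input(img * 255.0),
    last_conv_layer_name="conv5_block3_out",
)
print(heatmap.shape)  # (7, 7) heatmap, to be upsampled over the input image
```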
{"title":"Explaining deep-learning models using gradient-based localization for reliable tea-leaves classifications","authors":"Puja Banerjee, Susmita Banerjee, R. P. Barnwal","doi":"10.1109/ICAECC54045.2022.9716699","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716699","url":null,"abstract":"In deep learning solutions there has been a lot of ambiguity about how to make explainability inclusive of a machine learning pipeline. Recently, several deep learning techniques have been introduced to solve increasingly complicated problems with higher predictive capacity. However, this predictive power comes at the cost of high computational complexity and difficult to interpret. While these models often produce very accurate predictions, we need to be able to explain the path followed by such models for decision making. Deep learning models, in general, predict with no or very less interpretable explanations. This lack of explainability makes such models blackbox. Explainable Artificial Intelligence (XAI) aims at transforming this black box approach into a more interpretable one. In this paper, we apply the well known Grad-CAM technique for the explainability of tea-leaf classification problem. The proposed method classifies tea-leaf-bud combinations using pre-trained deep learning models. We add classification explainability in our tea-leaf dataset using the pre-trained model as an input to the Grad-CAM technique to produce class-specific heatmap. We analyzed the results and working of the classification models for their reliability and effectiveness.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124590838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Machine Learning Based Intrusion Detection
Shivam Kejriwal, Devika Patadia, Saloni Dagli, Prachi Tawde
Intrusion refers to any malicious activity carried out in order to access confidential data. An intrusion detection system (IDS) detects these attacks and, on detection, reports them to the administrator. It does so either by comparing new activity with past activity or by analyzing network performance. This system forms part of the larger security module and works with several other such sub-modules to make sure that unwanted intrusions do not go unreported. The system implemented in this paper is an anomaly-based Intrusion Detection System (IDS). The primary purpose of this implementation is to develop an efficient system to detect any external or internal unauthenticated activity. Several models have been experimented with in order to find the one that suits the system best and gives sufficiently high accuracy. The models experimented with include a Logistic Regression classifier, Random Forest Classifier, K-Nearest Neighbors classifier, XGBoost Classifier, Gaussian Naive Bayes Classifier and a Multi-Layer Perceptron Classifier (MLP). Further, the accuracy of each of these models was calculated, and a comparative analysis was done between the performance of these models. The model that performed best in this particular use case was the Random Forest Classifier, giving an accuracy of 99.8% and a macro-average F1-score of 0.98.
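A minimal scikit-learn sketch of such a classifier comparison is given below; the CSV file name, the feature/label column layout, and all hyperparameters are placeholders for illustration, not details taken from the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

# Assumed layout: numeric feature columns plus a binary "label" column
# (0 = normal traffic, 1 = intrusion). The file name is a placeholder.
df = pd.read_csv("network_flows.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "GaussianNB": GaussianNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}

# Fit each model and report accuracy and macro-averaged F1 on the test split.
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name:>18}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"macro-F1={f1_score(y_test, pred, average='macro'):.3f}")
```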
{"title":"Machine Learning Based Intrusion Detection","authors":"Shivam Kejriwal, Devika Patadia, Saloni Dagli, Prachi Tawde","doi":"10.1109/ICAECC54045.2022.9716648","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716648","url":null,"abstract":"Intrusion refers to any malicious activity done in order to access confidential data. An intrusion detection system (IDS) detects these attacks and, on detection, it reports them to the administrator. It does so either by comparing the new activity with the past activities or by analyzing the network performance. This system forms a part of the vast security module and works with several other such sub-modules in order to make sure that these unwanted intrusions do not go unreported. The system that has been implemented in this paper is an anomaly-based Intrusion Detection System (IDS). The primary purpose of this implementation is to develop an efficient system in order to detect any external or internal unauthenticated activity. Several models have been experimented with in order to find one that suits the system the best and gives a good enough accuracy. The models that have been experimented with include Logistic Regressor, Random Forest Classifier, K Nearest Neighbor classifier, XGBoost Classifier, Gaussian Naive Bayes Classifier and a Multi-Layer Perceptron Classifier (MLP). Further, the accuracy of each of these models was calculated, and a comparative analysis was done between the performance of these models. The model that performed the best in this particular use case was the Random Forest Classifier giving an accuracy of 99.8% and a macro average F1-Score of 0.98.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125998530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection of Skin Cancer Using Bi-Directional Emperical Mode Decomposition and GLCM
J. J. Imaculate, T. Bobby
Skin cancer is the most frequent type of cancer in humans, and it can be lethal. It occurs in several forms such as basal cell carcinoma, melanoma, and squamous cell carcinoma. Among these, melanoma is the most severe, dangerous and unpredictable. When it is diagnosed in the early stages, it can be controlled and cured considerably. Thus, a novel computational approach using texture-feature fusion and machine learning techniques is proposed to diagnose and classify skin lesions as benign or malignant. The workflow of this approach is preprocessing for noise and hair-strand removal, segmentation of the cancer-affected region, validation of the segmentation methods, statistical feature extraction, principal feature selection, classification as benign or malignant, and performance estimation of the classifier algorithm. The Otsu thresholding, enhanced Otsu thresholding and watershed segmentation methods are implemented, and the segmented images are validated using the Jaccard index and Dice index. Further, several features derived from the texture, colour, and shape of the segmented images are fused and, after a significant-feature selection step, fed to variants of the Support Vector Machine (SVM) classifier, and the performance of the classifiers is evaluated. The results show that the cubic SVM classifier (98%, 100%, and 99%) and the fine Gaussian SVM classifier (100%, 100% and 100%) perform well in terms of sensitivity, specificity and accuracy on the considered image dataset. Hence, the proposed method can be used for early detection and classification of melanoma.
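A compact sketch of the GLCM-texture-plus-SVM stage (segmentation and the colour/shape feature families omitted) could look like the following; the synthetic images and image size are placeholders, and "cubic SVM" and "fine Gaussian SVM" are mapped here to a degree-3 polynomial kernel and an RBF kernel, which is an assumption about the naming convention.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(gray_img):
    """GLCM texture descriptors from an 8-bit grayscale lesion image."""
    glcm = graycomatrix(gray_img, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    # 5 properties x 2 distances x 4 angles = 40 texture features per image.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder data: real use would load segmented dermoscopic lesion crops.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)  # 0 = benign, 1 = malignant

X = np.array([glcm_features(img) for img in images])

# "Cubic SVM" ~ polynomial kernel of degree 3; "fine Gaussian SVM" ~ RBF kernel.
for name, clf in [("cubic SVM", SVC(kernel="poly", degree=3)),
                  ("fine Gaussian SVM", SVC(kernel="rbf", gamma="scale"))]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```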
{"title":"Detection of Skin Cancer Using Bi-Directional Emperical Mode Decomposition and GLCM","authors":"J. J. Imaculate, T. Bobby","doi":"10.1109/ICAECC54045.2022.9716668","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716668","url":null,"abstract":"The most frequent type of cancer in humans is the skin cancer and it can be lethal. It affects in copious forms such as basal, melanoma, and squamous cell carcinoma. Among these, melanoma case is severe, most dangerous and unpredictable. When it is diagnosed in the early stages, it can be controlled and cured considerably. Thus, a novel computational approach using texture feature fusion and machine learning techniques is proposed to diagnose and classify the skin lesions as benign or malignant. The workflow of this approach is preprocessing for noise and hair strands removal, segmentation of the cancer affected region, validation of the segmentation methods, statistical feature extraction, principle feature selection, classification as benign or malignant and performance estimation of the classifier algorithm. The Otsu thresholding, enhanced Otsu thresholding and watershed segmentation methods are implemented and the segmented images are validated using the Jaccard index and Dice index. Further, several features derived from texture, colour, and shape of the segmented images are fused and fed to the variants of the Support Vector Machine (SVM) classifier after the significant features selection process and the performance of the classifiers are evaluated. The results show that cubic SVM classifier (98%, 100%, and 99%) and Fine Gaussian SVM classifier (100%, 100% and 100%) performs well in terms of sensitivity, specificity and accuracy for the considered image dataset. Hence, the proposed method can be used for early detection classification of melanoma.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126784886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Leaf Classification for Plant Recognition Using EfficientNet Architecture
Yagan Arun, G. S. Viknesh
Automatic plant species classification has always been a great challenge. Classical machine learning methods have been used to classify leaves using handcrafted features derived from the morphology of plant leaves, which has given promising results. However, we focus on using non-handcrafted features of plant leaves for classification. To achieve this, we utilize a deep learning approach for feature extraction and classification. Recently, deep convolutional neural networks have shown remarkable results in image classification and object-detection problems. With the help of the transfer learning approach, we explore and compare a set of pre-trained networks and identify the best classifier. That set consists of eleven different pre-trained networks loaded with ImageNet weights: AlexNet, EfficientNet B0 to B7, ResNet50, and Xception. These models are trained on the plant leaf image dataset, consisting of leaf images from eleven unique plant species. It was found that EfficientNet-B5 performed better at classifying leaf images than the other pre-trained models. Automatic plant species classification could be helpful for food engineers, people working in agriculture, researchers, and ordinary people.
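A minimal transfer-learning sketch with an ImageNet-pretrained EfficientNet-B5 backbone is shown below; the directory layout, image size, and training settings are assumptions for illustration rather than the authors' configuration.

```python
from tensorflow import keras

NUM_CLASSES = 11  # eleven plant species, as in the paper's dataset

# ImageNet-pretrained EfficientNet-B5 backbone with a new classification head.
base = keras.applications.EfficientNetB5(
    include_top=False, weights="imagenet", input_shape=(456, 456, 3))
base.trainable = False  # transfer learning: freeze the convolutional base

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: one sub-folder per species under train/ and val/.
train_ds = keras.utils.image_dataset_from_directory(
    "leaf_dataset/train", image_size=(456, 456), batch_size=16)
val_ds = keras.utils.image_dataset_from_directory(
    "leaf_dataset/val", image_size=(456, 456), batch_size=16)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```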
{"title":"Leaf Classification for Plant Recognition Using EfficientNet Architecture","authors":"Yagan Arun, G. S. Viknesh","doi":"10.1109/ICAECC54045.2022.9716637","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716637","url":null,"abstract":"Automatic plant species classification has always been a great challenge. Classical machine learning methods have been used to classify leaves using handcrafted features from the morphology of plant leaves which has given promising results. However, we focus on using non-handcrafted features of plant leaves for classification. So, to achieve it, we utilize a deep learning approach for feature extraction and classification of features. Recently Deep Convolution Neural Networks have shown remarkable results in image classification and object detection-based problems. With the help of the transfer learning approach, we explore and compare a set of pre-trained networks and define the best classifier. That set consists of eleven different pre-trained networks loaded with ImageNet weights: AlexNet, EfficientNet BO to B7, ResNet50, and Xception. These models are trained on the plant leaf image data set, consisting of leaf images from eleven different unique plant species. It was found that EfficientNet-B5 performed better in classifying leaf images compared to other pre-trained models. Automatic plant species classification could be helpful for food engineers, people related to agriculture, researchers, and ordinary people.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124818088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Classification of Nutrient Deficiencies in Plants Using Recurrent Neural Network
S. Ramasamy, V. Chandrasekar, A. M. Viswa Bharathy
The symptoms associated with nutrient deficiencies in plants tend to appear on the leaves. The color and shape of a leaf are often used for diagnosing nutritional deficiencies in plants, but classification based on these properties alone often poses a serious problem, since the same color and shape may stem from many different root causes. It is therefore necessary to carefully analyze the texture of the leaf with proper training of a classifier. In this paper, we design an acquisition-based classification model that utilizes the Internet of Things (IoT) for data acquisition and recurrent neural networks (RNNs) for the classification task. Prior to classification, the model is trained over several iterations based on careful observation of the features and their related symptoms. The simulation is conducted with fine-tuning of the classification after several iterations. The simulation results show that the proposed method obtains better classification performance, in terms of accuracy and F-measure, than other deep learning models.
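As a rough sketch of an RNN classifier of this kind (the abstract does not spell out the exact architecture), the snippet below trains a small LSTM on sequences of feature vectors; the sequence length, feature count, class labels, and synthetic data are all placeholders.

```python
import numpy as np
from tensorflow import keras

# Assumed setup: each sample is a sequence of feature vectors derived from
# leaf observations / IoT sensor readings over time; classes are deficiency
# types (e.g., N, P, K, none). All shapes below are illustrative.
TIME_STEPS, NUM_FEATURES, NUM_CLASSES = 20, 16, 4

model = keras.Sequential([
    keras.layers.Input(shape=(TIME_STEPS, NUM_FEATURES)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data for demonstration only.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, TIME_STEPS, NUM_FEATURES)).astype("float32")
y = rng.integers(0, NUM_CLASSES, size=200)
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)
```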
{"title":"Classification of Nutrient Deficiencies in Plants Using Recurrent Neural Network","authors":"S. Ramasamy, V. Chandrasekar, A. M. Viswa Bharathy","doi":"10.1109/ICAECC54045.2022.9716641","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716641","url":null,"abstract":"The symptoms associated with deficiencies in plants tends to appear often on the leaves. The color and shape of a leaf often used for diagnosing the nutritional deficiencies in plants and classification of these properties often pose serious problem. Since same color and shape may have many root cause problems. It is hence necessary to carefully analyze the texture of leaf with proper training of a classifier. In this paper, we design an acquisition-based classification model that utilizes Internet of Things (IoTs) for data acquisition and recurrent neural networks (RNN) for the task of classification. Prior classification, the model is trained over several iteration based on careful observation of features and its related symptoms. The simulation is conducted with fine-tuning of classification after several iterations. The results of simulation show that the proposed method obtains improved classification accuracy in terms of accuracy and F-measure than other deep learning models.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116325042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[Copyright notice]
{"title":"[Copyright notice]","authors":"","doi":"10.1109/icaecc54045.2022.9716701","DOIUrl":"https://doi.org/10.1109/icaecc54045.2022.9716701","url":null,"abstract":"","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116183680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DeepAttentiveNet: An automated deep based method for COVID-19 diagnosis based on chest x-rays
Ashima Yadav, Debajyoti Mukhopadhyay
The recent outbreak of coronavirus has impacted the whole world. The infectious respiratory disease has killed millions of people worldwide. Detecting the disease through RT-PCR and other tests is very time-consuming, and testing kits are not widely available. Chest x-rays and chest CT scans are also very effective techniques for diagnosing respiratory diseases. This paper proposes DeepAttentiveNet, a deep learning architecture that applies the pre-trained CNN-based architecture DenseNet to extract spatial features from the images. This is followed by an attention mechanism, which focuses on the information-rich regions of the images, thus enhancing the overall classification process. The performance of our model is analyzed on the COVID-19 Radiography dataset, which contains 21,000 x-ray images corresponding to different respiratory conditions such as COVID-19, lung opacity, and viral pneumonia. Our model can categorize the x-rays with a 97.1% F1-score and 97.5% accuracy. We have also compared our architecture with other popular CNN-based models and baseline methods to demonstrate its superior performance.
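A minimal sketch of a DenseNet backbone followed by a simple spatial-attention head is given below; the DenseNet121 variant, the particular attention formulation, and the three-class output are assumptions for illustration, not the authors' exact design.

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 3  # e.g., COVID-19, lung opacity, viral pneumonia (assumed)

inputs = keras.Input(shape=(224, 224, 3))
# Pre-trained DenseNet121 backbone extracts spatial feature maps.
backbone = keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_tensor=inputs)
features = backbone.output                               # (7, 7, 1024)

# Spatial attention: score each location, softmax over the 7x7 grid,
# then pool the feature maps with those attention weights.
scores = keras.layers.Conv2D(1, kernel_size=1)(features)  # (7, 7, 1)
weights = keras.layers.Softmax(axis=[1, 2])(scores)       # sums to 1 over space
pooled = keras.layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=[1, 2]))([features, weights])

outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(pooled)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```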
{"title":"DeepAttentiveNet: An automated deep based method for COVID-19 diagnosis based on chest x-rays","authors":"Ashima Yadav, Debajyoti Mukhopadhyay","doi":"10.1109/ICAECC54045.2022.9716640","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716640","url":null,"abstract":"The recent outbreak of coronavirus has impacted the whole world. The infectious respiratory disease has killed millions of people all over the world. The process of detecting the disease through RT-PCR and other tests is very time-consuming, and testing kits are not widely available. Chest x-rays and chest CT scans are also very effective techniques for diagnosing respiratory diseases. This paper proposes a DeepAttentiveNet, a deep-based architecture that applies the pre-trained CNN-based architecture DenseNet to extract the spatial features from the images. This is followed by the attention mechanism, which focuses on the information-rich region on the images, thus enhancing the overall classification process. The performance of our model is analyzed on the COVID 19 Radiography dataset, which contains 21,000 x-ray images corresponding to different respiratory infections like COVID 19, lung opacity, and viral pneumonia. Hence our model can categorize the x-rays with a 97.1% F1 score and 97.5% accuracy. We have also compared our architecture with other popular CNN-based models and baseline methods to demonstrate the superior performance of the model.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116783344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Handwritten Text Recognition from an Image with Android Application
Hanumant Mule, Namrata Kadam, D. Naik
Nowadays, storing information from handwritten documents for future use has become necessary. An easy way to store such information is to capture handwritten documents and save them in image format. Recognizing the text or characters present in an image is called Optical Character Recognition. Text extraction from images remains challenging due to stroke variation, inconsistent writing styles, cursive handwriting, and similar factors. In this work, we propose a CNN and BiLSTM model for text recognition. The model is evaluated on the IAM dataset and achieves 92% character recognition accuracy. It is deployed to Firebase as a custom model to increase usability. We have also developed an Android application that allows the user to capture or browse an image and extract the text from the picture by calling the Firebase model and saving the text in a file. To store the text file, the user can browse to an appropriate location. The proposed model works on both printed and handwritten text.
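A minimal sketch of a CNN-plus-BiLSTM recognizer of this general shape is shown below; the image size, character-set size, and layer widths are placeholders, and training would add a CTC loss as noted in the final comment.

```python
from tensorflow import keras

# Assumed setup: grayscale word images of size 128x32 and a character set of
# 79 symbols (IAM-like); the CTC blank token is added as one extra class.
IMG_W, IMG_H, NUM_CHARS = 128, 32, 79

inputs = keras.Input(shape=(IMG_W, IMG_H, 1))
x = keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = keras.layers.MaxPooling2D(2)(x)                  # (64, 16, 32)
x = keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = keras.layers.MaxPooling2D(2)(x)                  # (32, 8, 64)
# Collapse the height axis so the width axis becomes the time dimension.
x = keras.layers.Reshape((32, 8 * 64))(x)
x = keras.layers.Bidirectional(keras.layers.LSTM(128, return_sequences=True))(x)
x = keras.layers.Bidirectional(keras.layers.LSTM(64, return_sequences=True))(x)
# One extra output class for the CTC blank symbol.
outputs = keras.layers.Dense(NUM_CHARS + 1, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.summary()

# Training would minimise the CTC loss between these per-time-step character
# probabilities and the ground-truth transcriptions, e.g. with
# keras.backend.ctc_batch_cost inside a custom training step.
```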
{"title":"Handwritten Text Recognition from an Image with Android Application","authors":"Hanumant Mule, Namrata Kadam, D. Naik","doi":"10.1109/ICAECC54045.2022.9716714","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716714","url":null,"abstract":"Nowadays, Storing information from handwritten documents for future use is becoming necessary. An easy way to store information is to capture handwritten documents and save them in image format. Recognizing the text or characters present in the image is called Optical Character Recognition. Text extraction from the image in the recent research is challenging due to stroke variation, inconsistent writing style, Cursive handwriting, etc. We have proposed CNN and BiLSTM models for text recognition in this work. This model is evaluated on the IAM dataset and achieved 92% character recognition accuracy. This model is deployed to the Firebase as a custom model to increase usability. We have developed an android application that will allow the user to capture or browse the image and extract the text from the picture by calling the firebase model and saving text in the file. To store the text file user can browse for the appropriate location. The proposed model works on both printed and handwritten text.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123772946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Frequency Control and small signal stability Improvement with Fuzzy control based Single and two area power systems
Venu Yarlagadda, G. Lakshminarayana, M. Nagajyothi, I. Neelima
Modern power systems are designed for fine tuning of frequency and tolerate little deviation of the system frequency from its nominal value. The power system is dynamically subjected to small load perturbations, which can lead to non-oscillatory instability due to insufficient damping. The article addresses single-area and two-area load frequency control and small-signal stability analysis. It presents simulations of single-area and two-area systems under small load perturbations, with three cases for both kinds of power systems: Case 1 without any controller, Case 2 with a PI controller, and Case 3 with fuzzy controllers. The first part of the case study presents the simulation results of the single-area power system for all three cases; the second part presents the results for the two-area power system. The simulation results demonstrate that the fuzzy control keeps the frequency within its tolerable range and consequently ensures the small-signal stability of both power systems against load disturbances.
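For orientation, the sketch below simulates a textbook single-area load-frequency-control loop with a PI secondary controller (roughly the paper's Case 2); all parameter values are generic textbook numbers, not taken from the article, and the fuzzy controller of Case 3 would replace the PI law with a rule-based one.

```python
import numpy as np

# Minimal single-area load-frequency-control simulation with a PI controller.
# Parameter values are typical textbook numbers, not from the article.
Tg, Tt, Tp = 0.08, 0.3, 20.0     # governor, turbine, power-system time constants (s)
Kp_sys, R = 120.0, 2.4           # power-system gain (Hz/pu), speed droop (Hz/pu)
Kp_ctrl, Ki = 0.08, 0.3          # PI controller gains
dPL = 0.01                       # 1% step load disturbance (pu)

dt, T_end = 0.001, 30.0
n = int(T_end / dt)

x_g = x_t = df = integ = 0.0     # governor, turbine, frequency deviation, integrator
freq = np.zeros(n)

for k in range(n):
    # PI secondary controller acting on the frequency deviation.
    integ += -df * dt
    u = Kp_ctrl * (-df) + Ki * integ
    # First-order governor, turbine and generator/load dynamics (Euler steps).
    x_g += dt / Tg * (u - df / R - x_g)
    x_t += dt / Tt * (x_g - x_t)
    df  += dt / Tp * (Kp_sys * (x_t - dPL) - df)
    freq[k] = df

print(f"peak frequency dip : {freq.min():.4f} Hz")
print(f"steady-state error : {freq[-1]:.5f} Hz (integral action drives it toward zero)")
```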
{"title":"Frequency Control and small signal stability Improvement with Fuzzy control based Single and two area power systems","authors":"Venu Yarlagadda, G. Lakshminarayana, M. Nagajyothi, I. Neelima","doi":"10.1109/ICAECC54045.2022.9716680","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716680","url":null,"abstract":"Modern Power systems are designed for the fine tuning of frequency and less tolerance for system frequency deviation from nominal value. The Power System is dynamically subjected to the small perturbations of load leading to non-oscillatory Instability due to insufficient damping. The article entente the single area and two area load frequency control and small signal stability analysis. It dispenses the simulation of single area and two area systems with small perturbations of load, with three cases for both kinds of power systems. Case1 without any controller, case2 with PI controller and case3 with Fuzzy Controllers. The simulation is carried out for both single area and two area power systems with all three cases. In the first part of the case study, the simulation results of single area power system for all three cases have been presented. In the second part, the simulation results of two area power system for all three cases have been presented. The simulation results demonstrate the effectiveness of Fuzzy Control perpetuates the frequency with in the endurable range of frequency and subsequently it ensures the small signal stability of both the Power systems against load disturbances.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127427682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Highly Accurate Static Hand Gesture Recognition Model Using Deep Convolutional Neural Network for Human Machine Interaction
U. S. Babu, A. Raganna, K.N. Vidyasagar, S. Bharati, Gautam Kumar
In this work, we propose a deep convolutional neural network (DCNN) based model for static hand gesture recognition. Static hand gesture images corresponding to five different classes are presented to the DCNN model without any preprocessing. The model achieved train and test accuracies of 97.9% and 99.6% respectively, which is among the best reported accuracies for static hand gesture recognition applications. The model also performs well even with complex backgrounds and poor lighting conditions. Due to its accuracy and robustness, this model can be used in applications such as human-machine interaction and autonomous cars.
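A small Keras sketch of a DCNN fed raw gesture images is given below; the input resolution, layer widths, and dropout rate are illustrative assumptions, not the authors' reported architecture.

```python
from tensorflow import keras

NUM_GESTURES = 5  # five static hand-gesture classes, as in the paper

# Plain DCNN over raw RGB gesture images; only in-model rescaling is applied.
model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 3)),
    keras.layers.Rescaling(1.0 / 255),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(128, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.4),
    keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```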
{"title":"Highly Accurate Static Hand Gesture Recognition Model Using Deep Convolutional Neural Network for Human Machine Interaction","authors":"U. S. Babu, A. Raganna, K.N. Vidyasagar, S. Bharati, Gautam Kumar","doi":"10.1109/ICAECC54045.2022.9716619","DOIUrl":"https://doi.org/10.1109/ICAECC54045.2022.9716619","url":null,"abstract":"In this work, we propose a deep convolutional neural network (DCNN) based model for static hand gestures recognition. Static hand gesture images corresponding to five different classes are presented to DCNN model without any preprocessing. The model has achieved a train and test accuracy of 97.9% and 99.6% respectively which is one of the best ever reported accuracy in static hand gesture recognition applications. It is also found that the performance of the model is good even with complex backgrounds and poor lighting conditions. Due to its accuracy and robustness, this model can be implemented in applications such as human machine interaction and autonomous cars.","PeriodicalId":199351,"journal":{"name":"2022 IEEE Fourth International Conference on Advances in Electronics, Computers and Communications (ICAECC)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114392576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2