An Ensemble Deep Learning Approach for Diabetic Retinopathy Detection using Fundus Image
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009304
Sandra Johnson, Lourdu Jennifer J R, G. Karthikeyan, Vengadapathiraj M, D. Sasireka
Detection of diseases such as diabetic retinopathy (DR) can be greatly improved by examining fundus photographs of the back of the eye. Diabetic complications are the most common cause of vision problems, notably in younger and more financially secure age groups. The risk of blindness in patients with DR can be reduced if they are diagnosed early enough. In DR screening, an ophthalmologist examines the fundus image for lesions. However, the rising incidence of DR is not matched by the number of ophthalmologists able to interpret fundus images, which can delay prevention and treatment. Consequently, an automated diagnosis system is required to help ophthalmologists make the diagnostic process more efficient. This study uses a concatenation model to classify fundus images into three categories: no diabetic retinopathy, non-proliferative diabetic retinopathy, and proliferative diabetic retinopathy. DenseNet121 and Inception-ResNetV2 serve as the backbone models, and their feature-extraction outputs are combined and classified with a multilayer perceptron (MLP). Compared with a single model, this strategy achieves 91 percent accuracy, precision, and recall and a 90 percent F1-score. The experiment demonstrates successful deep-learning-based DR classification using fundus image data.
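The fusion described above, two pretrained backbones whose pooled features are concatenated and passed to an MLP head, can be sketched as follows. This is a minimal illustration rather than the authors' exact network: the input size, layer widths, dropout rate, and optimizer are assumptions.

```python
# Hedged sketch: feature-level fusion of DenseNet121 and InceptionResNetV2 followed by an
# MLP head, as the abstract describes. Input size, layer widths, and optimizer are assumptions.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet121, InceptionResNetV2

def build_fusion_model(input_shape=(299, 299, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Two ImageNet-pretrained backbones used purely as feature extractors.
    densenet = DenseNet121(include_top=False, weights="imagenet", pooling="avg")
    inception = InceptionResNetV2(include_top=False, weights="imagenet", pooling="avg")
    densenet.trainable = False
    inception.trainable = False

    feat_a = densenet(inputs)   # (batch, 1024)
    feat_b = inception(inputs)  # (batch, 1536)

    # Concatenate the two feature vectors and classify with a small MLP.
    merged = layers.Concatenate()([feat_a, feat_b])
    x = layers.Dense(512, activation="relu")(merged)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fusion_model()
model.summary()
```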
{"title":"An Ensemble Deep Learning Approach for Diabetic Retinopathy Detection using Fundus Image","authors":"Sandra Johnson, Lourdu Jennifer J R, G. Karthikeyan, Vengadapathiraj M, D. Sasireka","doi":"10.1109/ICECA55336.2022.10009304","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009304","url":null,"abstract":"Detection of diseases, including diabetic retinopathy, may be greatly improved by taking a fundus picture of the back of the eye (DR). Complications in diabetics are the most common cause of vision problems, notably in younger and much more financially secure age groups. The risk of blindness in patients with DR may be reduced if they are diagnosed early enough. An ophthalmologist examined the fundus picture and used DR screening to look for lesions. However, the increase in incidence of DR is not correlated with the number of ophthalmologists who are able to interpret fundus pictures. Delay in prevention and treatment of DR may result as a result of this. Consequently, an automated diagnosis system is required to assist ophthalmologists in increasing the diagnostic process efficiency. The concatenate model is used in this study to differ fundus images into three categories: those without diabetic retinopathy, those with non-proliferative diabetic retinopathy, and those with proliferative diabetic retinopathy. We're using DenseNet121 and Inception-ResNetV2 for our models. Two models' feature extraction findings are integrated using the multilayer perceptron (MLP) classification approach. Compared to a single model, our strategy provides an increase in accuracy, precision, and recall of 91 percent and 90 percent for the F1-score. Deep-learning-based DR categorization utilizing fundus picture data was successfully shown in this experiment.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126306208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transfer Learning and One Class Classification - A Combined Approach for Tumor Classification
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009483
N. Deepa, R. Sumathi
Deep learning models have extended their application to computer-aided diagnosis of various medical conditions; identification of tumors from Magnetic Resonance Imaging (MRI) scans is one of them. However, when the number of observations in one class is far lower than in the other, techniques such as one-class classification must be employed. This work combines transfer learning with one-class classification. The pre-trained CNN best able to distinguish MRI images with and without tumors is identified and used for feature extraction. Features are extracted from a dataset of 465 positive and 46 negative images and given as input to the one-class classifiers. The pre-trained models compared are VGG19, ResNet50, and DenseNet121; VGG19 shows the best performance and is therefore used for feature extraction. The one-class classifiers compared are the one-class support vector machine and the isolation forest, and the one-class support vector machine performs better than the isolation forest algorithm.
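A minimal sketch of this pipeline, frozen VGG19 features feeding a one-class SVM, is shown below; the image size, the nu/kernel settings, and the placeholder arrays standing in for the MRI data are assumptions, not the paper's configuration.

```python
# Hedged sketch: VGG19 as a frozen feature extractor and a one-class SVM trained only on
# the majority (tumor) class. Data and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.svm import OneClassSVM

extractor = VGG19(include_top=False, weights="imagenet", pooling="avg")  # 512-dim features

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images), verbose=0)

# Placeholder arrays standing in for the 465 tumor / 46 non-tumor MRI images.
train_positive = np.random.rand(32, 224, 224, 3) * 255.0
test_mixed = np.random.rand(8, 224, 224, 3) * 255.0

X_train = extract_features(train_positive)
X_test = extract_features(test_mixed)

# The one-class SVM learns the boundary of the positive class; -1 marks outliers (non-tumor).
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)
print(ocsvm.predict(X_test))
```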
{"title":"Transfer Learning and One Class Classification - A Combined Approach for Tumor Classification","authors":"N. Deepa, R. Sumathi","doi":"10.1109/ICECA55336.2022.10009483","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009483","url":null,"abstract":"Deep learning models have extended its application in computer aided diagnosis of various medical complications. Identification of tumors from the images obtained from Magnetic Resonance Imaging (MRI) is one among them. But, in certain situations where the availability of dataset, in specific, the number of observations in a particular class, is very low than the other class, techniques such as one-class classification has to be incurred. This work combines the concept of transfer learning and one-class classification. The best pre-trained CNN which is capable of classifying the MRI images with tumors and without tumors is identified and is used for feature extraction. The features are extracted from a dataset with 465 positive images and 46 negative images. The extracted features are given as input to the one-class classifiers. The pre-trained models compared are VGG19, Resnet50 and Densenet121. VGG19 shows the best performance and hence used for feature extraction. The one-class classifiers compared are one-class support vector machine and isolation forest. One-class support vector machine performs better than the isolation forest algorithm.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122280008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solar and Wind Integration of Electric Vehicles using SEPIC Fused Converter
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009515
P. Loganathan, R. Sathish, S. Palanivel, P. Selvam, R. Devarajan, Vinod Kumar
In an effort to preserve the planet, alternative energy sources are gaining popularity. To address pollution head-on, this research proposes a hybrid electric vehicle (HEV) system based on a hybrid solar/wind electric vehicle (HEVS); wind and solar are the most popular forms of renewable energy. In today's hybrids, the internal combustion engine is paired with one or more electric motors that draw power from batteries. Plugging the HEV into an external power source is not an option for recharging, so a combination of regenerative braking and the internal combustion engine provides the power needed to charge the vehicle. The suggested system therefore has the potential to lessen reliance on fossil fuels, lower pollution levels, and open the door to renewable energy for transportation. A DC-DC converter receives input power from both sources: a direct-current (DC) generator in the windmill converts mechanical energy directly into electricity, and a SEPIC is used as a DC-DC buck-boost converter so that the output voltage can be set precisely. Maximum power point tracking (MPPT) based on the incremental conductance (INC) method regulates the duty ratio, and a battery stores the combined energy output from the two generators. The suggested system is implemented on a PIC microcontroller platform to verify its performance.
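The incremental-conductance rule compares dI/dV with -I/V to decide which way to move the operating point. A minimal sketch of one control step is given below; the step size, duty-ratio limits, and the sign convention (assuming a converter where raising the duty ratio lowers the PV operating voltage) are assumptions.

```python
# Minimal sketch of incremental-conductance (INC) MPPT duty-ratio control.
# Step size, limits, and the sign convention are assumptions for illustration.
def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005):
    """Return an updated SEPIC duty ratio from successive PV voltage/current samples."""
    if v <= 0:
        return duty                    # no PV voltage yet; leave the duty ratio unchanged
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            duty -= step
        elif di < 0:
            duty += step               # di == 0: operating at the MPP, no change
    else:
        if di / dv == -i / v:
            pass                       # dI/dV == -I/V is the MPP condition
        elif di / dv > -i / v:
            duty -= step               # left of the MPP: raise the PV voltage
        else:
            duty += step               # right of the MPP: lower the PV voltage
    return min(max(duty, 0.05), 0.95)  # keep the duty ratio in a safe range

# Example: one control step from two consecutive samples.
print(inc_mppt_step(v=17.8, i=4.9, v_prev=18.0, i_prev=4.8, duty=0.45))
```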
{"title":"Solar and Wind Integration of Electric Vehicles using SEPIC Fused Converter","authors":"P. Loganathan, R. Sathish, S. Palanivel, P. Selvam, R. Devarajan, Vinod Kumar","doi":"10.1109/ICECA55336.2022.10009515","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009515","url":null,"abstract":"In an effort to preserve our planet, alternative energy sources are gaining popularity. In order to address pollution head-on, the authors of this research suggest a hybrid electric vehicle (HEV) system. The most popular forms of renewable energy are wind and solar. These days, the internal combustion engine of a hybrid (solar/wind) electric vehicle (HEVS) is paired with one or more electric motors that draw power from batteries. Plugging a HEV into an external power source is not an option for recharging the battery. A combination of regenerative braking and the internal combustion engine provides the power needed to charge the car. As a result, the suggested system has the potential to lessen reliance on fossil fuels, lower pollution levels, and open the door to the use of renewable energy for transportation. The DC-DC converter receives input power from both sources. A direct current (dc) generator is used in windmills to directly transform mechanical energy into electricity. A SEPIC is used to simulate a DC-DC buck-boost converter, allowing the output voltage to be set precisely. MPPT based on INC is used to regulate the duty ratio. A battery in the system stores the combined energy output from the two generators. PIC microcontroller platform is used to implement the suggested system and ensure its performance.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"270 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115955242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comprehensive Review for Classification and Segmentation of Gastro Intestine Tract
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009547
N. Sharma, Avinash Sharma, Sheifali Gupta
The term “gastrointestinal tract” refers to the digestive system that receives food, breaks it down, absorbs its nutrients, and then expels the remainder as waste. The gastrointestinal (GI) tract plays a significant role in the global burden of cancer-related mortality: according to the Global Cancer Statistics 2020 figures, GI tract cancers are a leading cause of cancer-related deaths and pose a substantial challenge to rising life expectancy. Investigating and identifying GI tract anomalies requires a thorough examination of the tract, so a method is needed to detect these anomalies at an early stage. This article presents a comprehensive study of research on the GI tract based on machine learning and deep learning techniques. The analysis is divided into classification and segmentation, and the paper covers the techniques used in previous years on different datasets.
{"title":"A Comprehensive Review for Classification and Segmentation of Gastro Intestine Tract","authors":"N. Sharma, Avinash Sharma, Sheifali Gupta","doi":"10.1109/ICECA55336.2022.10009547","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009547","url":null,"abstract":"The term “gastrointestinal tract” refers to the digestive system that receives food, breaks it down, absorbs its nutrients, and then expels it as waste.The Gastrointestinal (GI) tract has a significant role in the global burden of cancer-related mortality. According to the Global Cancer Statistic 2020 figures, GI tract cancers are the main reason for cancer-related mortality and provide a substantial challenge to the rising life expectancy. Investigating and identifying GI tract anomalies need a thorough examination of the GI tract. So there is a need for a method by which these anomalies can be detected at an early stage. In this article, a comprehensive study of the research done in the area of the GI tract based on machine learning and deep learning techniques has been presented. The analysis of GI is divided into classification and segmentation. The paper covers all the techniques for classification and segmentation used in the previous years on different datasets.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"490 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116537601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Objective Artificial Flora Algorithm Based Optimal Handover Scheme for LTE-Advanced Networks
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009271
Kiran Mannem, Pasumarthy Nageswara Rao, S. M. Reddy
The Long-Term Evolution Advanced Network (LTE-AN) offers a number of benefits, including high speed, high data rate, and low latency, but it also has significant drawbacks in seamless connectivity and resource management. Addressing these issues requires an efficient handover scheme, so this paper proposes an Optimal Hand-Over scheme based on a Multi-Objective Artificial Flora (OHO-MOAF) algorithm. First, the handover (HO) parameters of each evolved Node B (eNB), or base station (BS), are calculated; these parameters are then used as objective functions in the proposed algorithm, which selects the target eNB optimally. Simulation results show that the OHO-MOAF scheme outperforms the existing HO technique in terms of call blocking and call dropping with HO failure.
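The abstract does not detail the artificial-flora update rules, so the sketch below only illustrates the surrounding idea: each candidate eNB is scored on several handover objectives and the best-scoring cell is chosen as the target. The objective names, normalization, and weights are hypothetical, not the paper's MOAF formulation.

```python
# Hedged sketch of multi-objective target-eNB selection; the MOAF optimizer itself is not
# reproduced here. Objective names, normalization, and weights are assumptions.
from dataclasses import dataclass

@dataclass
class ENodeB:
    name: str
    rsrp_dbm: float          # received signal strength (higher is better)
    load: float              # fraction of occupied resources (lower is better)
    ho_failure_rate: float   # historical handover failure rate (lower is better)

def select_target_enb(candidates, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum scalarisation of the three objectives; returns the best candidate."""
    w_rsrp, w_load, w_fail = weights
    def score(e):
        # Normalise RSRP from roughly [-120, -70] dBm into [0, 1].
        rsrp_norm = (e.rsrp_dbm + 120.0) / 50.0
        return w_rsrp * rsrp_norm + w_load * (1.0 - e.load) + w_fail * (1.0 - e.ho_failure_rate)
    return max(candidates, key=score)

cells = [ENodeB("eNB-1", -95.0, 0.7, 0.05),
         ENodeB("eNB-2", -88.0, 0.4, 0.02),
         ENodeB("eNB-3", -102.0, 0.2, 0.01)]
print(select_target_enb(cells).name)
```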
{"title":"Multi-Objective Artificial Flora Algorithm Based Optimal Handover Scheme for LTE-Advanced Networks","authors":"Kiran Mannem, Pasumarthy Nageswara Rao, S. M. Reddy","doi":"10.1109/ICECA55336.2022.10009271","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009271","url":null,"abstract":"Currently, the Long-Term Evolution Advanced Network (LTE-AN) has a number of benefits, including fast speed, high data rate, and low latency, but it also has significant drawbacks, including seamless connectivity and resource management. To solve these issues, an efficient handover scheme is to be presented. So, in this paper an Optimal Hand-Over scheme based on Multi-Objective Artificial Flora (OHO-MOAF) algorithm is proposed. Initially, Hand Over (HO) parameters of each evolved Node B (eNB) or base station (BS) are calculated. Then these parameters are utilized as objective functions in the proposed algorithm. Based on this algorithm, the target eNB is selected optimally. The simulation results show that the OHO-MOAF scheme outperforms the existing HO technique in terms of call blocking and call dropping with HO failure.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113932876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Cloud Security Enhancement Scheme to Defend against DDoS Attacks by using Deep Learning Strategy
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009177
R. S. Prabhu, A. Prema, E. Perumal
Cloud computing is a recent technology that allows users to create services on demand. It has gained acceptance because of its self-service capability and on-demand services, which give users significant flexibility: they simply pay for the services they require rather than worrying about the expense of equipment or software support. The major benefit of a cloud-based environment for an organization is that it simplifies data maintenance and improves the integrity of the service, avoiding manual flaws in maintenance. However, remote cloud-based data maintenance and evaluation leads to certain security threats, especially Distributed Denial of Service (DDoS) attacks. These attacks arise from attempts by intruders or hackers to compromise data stored at the server end or in transit between client and server; the attacker obtains the data and modifies it at will without the knowledge of the data owner. Such attacks are particularly dangerous because the confidentiality of the data is completely compromised. This paper designs a novel deep learning strategy called Modified Learning based Cloud Attack Detection (MLCAD), which adapts features from the conventional security handling scheme known as the Intelligent Attack Identification Strategy (IAIS). The proposed MLCAD approach identifies DDoS attacks in the cloud environment by analyzing the authorization and authentication logic of each user, examining the Internet Protocol (IP) address given in the relevant request, and inspecting the metadata acquired from the user end. These provisions allow MLCAD to identify DDoS attacks efficiently. The paper provides graphical results to demonstrate the integrity and performance of the proposed approach.
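The MLCAD internals are not specified here, so the following is only a rough sketch of the request-screening step the abstract describes: checking a request's authentication status, source-IP behaviour, and metadata before deeper analysis. The rate threshold and field names are hypothetical.

```python
# Hedged sketch of per-request screening for DDoS indicators; thresholds and field names are
# hypothetical and the deep-learning classifier that MLCAD uses is not reproduced here.
import time
from collections import defaultdict, deque

WINDOW_S = 10
MAX_REQUESTS_PER_WINDOW = 100
recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def screen_request(request):
    """Return (suspicious, reasons) for a request dict with 'ip', 'auth_ok', 'metadata' keys."""
    now = time.time()
    reasons = []

    q = recent[request["ip"]]
    q.append(now)
    while q and now - q[0] > WINDOW_S:
        q.popleft()
    if len(q) > MAX_REQUESTS_PER_WINDOW:
        reasons.append("request rate exceeds per-IP threshold")

    if not request.get("auth_ok", False):
        reasons.append("failed authentication/authorization check")
    if not request.get("metadata"):
        reasons.append("missing client metadata")

    return (len(reasons) > 0, reasons)

print(screen_request({"ip": "203.0.113.7", "auth_ok": False, "metadata": {}}))
```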
Comparison of Cryptographic Techniques: Classical, Quantum and Neural
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009164
M. Ghute, Y. Suryawanshi
Nowadays, the need for highly secure and reliable networks has increased tremendously in wireless communication. Various routing attacks occur in wireless networks, so secure routing is one of the most challenging research areas in mobile ad-hoc networks (MANETs). Several methods are available for protecting a MANET, yet many attacks still reduce network performance; hence a strong cryptographic technique is required to secure communication in a MANET. An efficient cryptographic method must not only generate and maintain keys but also distribute them safely to nodes that are not malicious. The method proposed here detects malicious nodes and keeps them out of communication in the network, so that the packet delivery rate increases and delay is reduced. Reliable communication in a MANET is achieved by applying strong cryptographic methods. This paper presents a comparison of classical, quantum, and neural cryptography.
{"title":"Comparison of Cryptographic Techniques: Classical, Quantum and Neural","authors":"M. Ghute, Y. Suryawanshi","doi":"10.1109/ICECA55336.2022.10009164","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009164","url":null,"abstract":"Now days, the necessity of highly secured and reliable network is tremendously increased in the wireless communication network. There are various routing attacks occurs in wireless communication network there for secure routing is one of the most challenging research area in a mobile ad-hoc network-MANETs. Several methods are available for providing safety of the MANET, still various attacks are there which reduces network performance. Hence a strong cryptography technique is required to secure communication in MANET. An efficient cryptographic method is required, which will not only generate and maintain key also distribute it safely to the nodes which are not malicious. The method proposed here detects the nodes which are malicious and keeps them away from communication in the network so that packet delivery rate is increased by reducing delay in the network. The reliable communication in MANET is achieved by applying strong cryptography methods. In this paper comparison of classical, quantum and neural cryptography are given.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122776796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ant Colony Optimized AmoebaNet-A Algorithm for Hyperspectral Image Classification
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009426
S. Srinivasan, K. Rajakumar
Hyperspectral imaging (HSI) is one of the most widely used imaging techniques in numerous real-time applications, and the detailed spectral information it provides is one of its main advantages: each pixel carries spectral information that can be analyzed effectively from hyperspectral images. The relationship between the high-resolution data and object groups is carefully incorporated into the classification. Classifying hyperspectral images with conventional classification techniques is quite complex. Recently, numerous research studies have demonstrated the substantial potential of deep learning techniques for feature extraction, and various non-linear problems are effectively solved through deep learning. Conventional deep-learning-based HSI classification approaches lag in performance, so an efficient deep learning model, AmoebaNet-A, is presented in this research work for HSI classification. Additionally, a nature-inspired ant colony model is incorporated for network parameter optimization. Simulation analysis of the presented approach validates the improved performance on two datasets, the Indian Pines (IP) dataset and Italy's University of Pavia (UP) dataset. Comparative analysis with existing approaches such as the optimized self-organizing map and EN-B4-SRO confirms the higher performance of the proposed model on metrics such as average accuracy, kappa coefficient, and overall accuracy.
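As a rough illustration of the ant-colony parameter-optimization step, the sketch below runs a simple pheromone-weighted search over a few discrete hyperparameters. The search space, pheromone rules, and stand-in evaluation function are assumptions; the real system would score each configuration by training AmoebaNet-A on the HSI data.

```python
# Hedged sketch of ant-colony optimisation over discrete network hyperparameters.
# The search space and the placeholder evaluation function are assumptions.
import random

SPACE = {"learning_rate": [1e-2, 1e-3, 1e-4],
         "dropout": [0.2, 0.3, 0.5],
         "batch_size": [16, 32, 64]}

def evaluate(config):
    # Placeholder for the validation accuracy obtained by training the network with `config`.
    return random.random()

def aco_search(n_ants=10, n_iters=20, rho=0.3):
    pher = {k: [1.0] * len(v) for k, v in SPACE.items()}  # pheromone per option
    best, best_score = None, -1.0
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Each ant samples one option per parameter, proportional to pheromone.
            idx = {k: random.choices(range(len(v)), weights=pher[k])[0] for k, v in SPACE.items()}
            config = {k: SPACE[k][i] for k, i in idx.items()}
            score = evaluate(config)
            if score > best_score:
                best, best_score = config, score
            for k, i in idx.items():
                pher[k][i] += score          # deposit pheromone on the chosen options
        for k in pher:                       # evaporate
            pher[k] = [(1 - rho) * p for p in pher[k]]
    return best, best_score

print(aco_search())
```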
{"title":"Ant Colony Optimized AmoebaNet-A Algorithm for Hyperspectral Image Classification","authors":"S. Srinivasan, K. Rajakumar","doi":"10.1109/ICECA55336.2022.10009426","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009426","url":null,"abstract":"Hyperspectral imaging is one of the most widely used imaging techniques in numerous real-time applications. The detailed spectral information provided by hyperspectral imaging (HSI) is one of its main advantages. Each pixel has spectral information, and it can be effectively analyzed from hyperspectral images.The relationship among the high-resolution and object groups is carefully incorporated into the classification.Classifying hyperspectral images through conventional classification techniques is quite complex. Recently, deep learning techniques and their substantial potential in feature extraction have been proven in numerous research studies. Various non-linear problems are effectively solved through deep learning techniques. Conventional deep learning models based HSI classification approaches lags in performance, Thus, an efficient deep learning model, AmoebaNet-A, is presented in this research work for HSI classification. Additionally, nature inspired ant colony model is incorporated for network parameter optimization. Simulation analysis of the presented approach validates the improved performance using two data sets like the Indian Pines (IP) dataset and Italy's University of Pavia dataset (UP). Comparative analysis with existing approaches like optimized Self-organized map, EN-B4-SRO validates the higher performances of proposed model using the metrics like average accuracy, kappa coefficient and overall accuracy.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131558760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Internet of Things based Waste Management System using Hybrid Machine Learning Technique
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009242
Arunkumar M S, S. P, S. R, D. S
One of the most significant aspects of creating smart cities is waste management. Recycling and landfilling are two methods of waste management that lead to the disposal of trash, and population growth makes it difficult to maintain cleanliness in urban areas. Because machine learning (ML) and the Internet of Things (IoT) ease the gathering, integration, and processing of diverse kinds of information, they provide an agile solution for classification and real-time monitoring. This work therefore creates an IoT-based waste management scheme. The IoT has been used to track movements and to help with garbage management, and a hybrid machine learning technique called Decision Tree with Extreme Learning Machine (DT-ELM) was used to analyze data about a city. A single classifier requires time-consuming iterative training, which the suggested hybrid model avoids. Decision trees use traits that are good at classifying, and additional weights for the selected features are calculated to improve categorization accuracy; entropy theory is used to map the decision tree to the ELM in order to obtain accurate prediction results. With this network, the garbage kind, truck size, and waste source can all be analyzed, and the waste management centers are informed so that the proper action can be taken. An experiment was conducted to test the efficiency of the IoT-based trash management system.
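A minimal sketch of the DT-ELM idea, an entropy-based decision tree supplying feature weights to a simple extreme learning machine, is shown below. The synthetic data, hidden-layer size, and weighting scheme are assumptions rather than the authors' exact formulation.

```python
# Hedged sketch of DT-ELM: a decision tree (entropy criterion) provides feature importances
# that re-weight the inputs of a simple extreme learning machine. Data and sizes are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, n_informative=6, random_state=0)

# Step 1: the decision tree gives per-feature importance weights.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
weights = tree.feature_importances_ + 1e-3   # avoid zeroing out features entirely
Xw = X * weights

# Step 2: extreme learning machine -- random hidden layer, closed-form output weights.
rng = np.random.default_rng(0)
n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(Xw @ W + b)
T = np.eye(2)[y]                             # one-hot targets
beta = np.linalg.pinv(H) @ T                 # least-squares output weights (no iterative training)

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```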
{"title":"An Internet of Things based Waste Management System using Hybrid Machine Learning Technique","authors":"Arunkumar M S, S. P, S. R, D. S","doi":"10.1109/ICECA55336.2022.10009242","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009242","url":null,"abstract":"The most significant aspects of creating smart cities is waste management. Recycling and landfilling are two methods of waste management that lead to the demolition of trash. Because of population expansion, it is difficult to maintain cleanliness in urban areas. Because the machine learning (ML) and Internet of Things (IoT) eases the gathering, integration, and processing of diverse kinds of information, it provides an agile solution for classification and real-time monitoring. It is our intention to create a waste management scheme based on the IoT. The IoT has been used to keep tabs on people's movements and to help with garbage management. A machine learning technique called Decision Tree with Extreme Learning Machine was used to analyze data about a city (DT-ELM). The single classifier requires iterative training, which is time consuming, but the suggested hybrid model does not. Decision trees use traits that are good at classifying. Additional weights for the selected features are calculated to improve their categorization accuracy. We use the entropy theory to map the decision tree to ELM in order to get accurate prediction results. The garbage kind, truck size, and waste source may all be analyzed thanks to the network. In order to take the proper action, the waste management centers were informed of this information. An experiment was conducted to test the efficiency of an IoT -based trash management system.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121869795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconfigurable Hardware Implementation of CNN Accelerator using Zero-bypass Multiplier
Pub Date: 2022-12-01 | DOI: 10.1109/ICECA55336.2022.10009522
M. Vanitha, Guntamadugu Ganesh, G. Thirumalesh, E. Tharun
Convolutional Neural Networks (CNNs) have undergone accelerated growth due to their capacity to solve challenging image recognition problems, and they are used to handle a growing number of tasks such as speech recognition and the segmentation and categorization of images. The ever-increasing processing needs of CNNs are driving the market for hardware support strategies. Moreover, CNN workloads are streaming in nature, which makes them a good fit for reconfigurable hardware architectures such as Field Programmable Gate Arrays (FPGAs). Neural networks are a computer architecture inspired by the way the human brain processes information: an artificial neural network consists of a large number of densely interconnected simple processors, or neurons. By adding a simplified zero-bypass multiplier to the neural computation of the system, the proposed design can reduce processing time and complexity while handling a broad range of datasets. The suggested CNN comprises two convolutional layers and two hidden layers, and it is implemented on a Xilinx Zynq 7z020 FPGA using the Verilog HDL, with consideration for area utilization, power estimation, and logic utilization.
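The zero-bypass multiplier itself is implemented in Verilog on the FPGA; the snippet below is only a behavioural model of the idea in Python: whenever either operand of a multiply-accumulate is zero, the multiplication is skipped and the accumulator is left unchanged, which is what saves switching activity in hardware.

```python
# Behavioural sketch (not the hardware implementation) of a zero-bypass multiply-accumulate.
import numpy as np

def zero_bypass_mac(activations, weights):
    """Multiply-accumulate that skips the multiply whenever an operand is zero."""
    acc = 0
    skipped = 0
    for a, w in zip(activations, weights):
        if a == 0 or w == 0:
            skipped += 1          # bypass: no multiplication issued
            continue
        acc += a * w
    return acc, skipped

acts = np.array([0, 3, 0, 1, 2, 0, 4, 0])   # ReLU outputs are frequently zero
wts  = np.array([2, 0, 1, 5, 3, 7, 0, 1])
print(zero_bypass_mac(acts, wts))            # -> (11, 6): six of eight multiplies are bypassed
```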
{"title":"Reconfigurable Hardware Implementation of CNN Accelerator using Zero-bypass Multiplier","authors":"M. Vanitha, Guntamadugu Ganesh, G. Thirumalesh, E. Tharun","doi":"10.1109/ICECA55336.2022.10009522","DOIUrl":"https://doi.org/10.1109/ICECA55336.2022.10009522","url":null,"abstract":"Convolutional Neural Networks (CNNs) have undergone accelerated growth due to their capacity to resolve challenging image recognition problems. They are utilized to handle an increasing number of difficulties, such as speech recognition, and the segmentation and categorization of images. The ever-increasing processing needs of CNNs are spawning the market for hardware support strategies. Moreover, CNN workloads are of a streaming nature, which makes them a good choice for reconfigurable hardware architectures like as Field Programmable Gate Arrays (FPGAs). Neural networks are a sort of computer architecture inspired by the way the human brain processes information. A artificial neural network consists of a large number of densely interconnected individual processors, or neurons. By adding a simplified bypass zero multiplier to the neural computing of the system, the proposed system may reduce the processing time and complexity while handling a broad range of datasets. The suggested CNN comprises of two hidden layers and two convolutional layers. The proposed CNN is implemented on a Xilinx zynq 7z020 FPGA using the verilog HDL programming language, with the consideration for space utilization, power estimation, and logical utilization.","PeriodicalId":356949,"journal":{"name":"2022 6th International Conference on Electronics, Communication and Aerospace Technology","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134478553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}