A country's economic growth depends largely on the quantity and quality of its crop production. Cotton is one of the major crops in India, and about 23 percent of Indian cotton is exported to other countries. Classifying cotton crops manually is time-consuming for farmers and often inaccurate. To address this issue, this paper classifies cotton crops using the LeNet deep learning model. The novelty of the paper lies in a hybrid optimization algorithm, the proposed sine tangent search algorithm, used to train LeNet. Initially, the hyperspectral image is pre-processed by anisotropic diffusion and then passed on for further processing. SegNet, a deep learning model, is used to segment the pre-processed image. Feature extraction is then carried out to capture clear details of the image, in which the vegetation index and spectral-spatial features are computed. Finally, the cotton crop is classified from the segmented image and extracted features using LeNet trained by the sine tangent search algorithm, which is formed by hybridizing the sine cosine algorithm and the tangent search algorithm. The performance of the sine tangent search algorithm enabled LeNet is assessed with evaluation metrics along with the Receiver Operating Characteristic (ROC) curve. These metrics show that the sine tangent search algorithm enabled LeNet is highly effective for cotton crop classification, with an accuracy of 91.7%, a true negative rate of 92%, and a true positive rate of 92%.
{"title":"Sine tangent search algorithm enabled LeNet for cotton crop classification using satellite image","authors":"Devyani Jadhav Bhamare, Ramesh Pudi, Garigipati Rama Krishna","doi":"10.3233/mgs-230055","DOIUrl":"https://doi.org/10.3233/mgs-230055","url":null,"abstract":"Economic growth of country largely depends on crop production quantity and quality. Among various crops, cotton is one of the major crops in India, where 23 percent of cotton gets exported to various other countries. To classify these cotton crops, farmers consume much time, and this remains inaccurate most probably. Hence, to eradicate this issue, cotton crops are classified using deep learning model, named LeNet in this research paper. Novelty of this paper lies in utilization of hybrid optimization algorithm, named proposed sine tangent search algorithm for training LeNet. Initially, hyperspectral image is pre-processed by anisotropic diffusion, and then allowed for further processing. Also, SegNet is deep learning model that is used for segmenting pre-processed image. For perfect and clear details of pre-processed image, feature extraction is carried out, wherein vegetation index and spectral spatial features of image are found accurately. Finally, cotton crop is classified from segmented image and features extracted, using LeNet that is trained by sine tangent search algorithm. Here, sine tangent search algorithm is formed by hybridization of sine cosine algorithm and tangent search algorithm. Then, performance of sine tangent search algorithm enabled LeNet is assessed with evaluation metrics along with Receiver Operating Characteristic (ROC) curve. These metrics showed that sine tangent search algorithm enabled LeNet is highly effective for cotton crop classification with superior values of accuracy of 91.7%, true negative rate of 92%, and true positive rate of 92%.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140079913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain is now widely used in industry to address security issues in information systems. The Internet of Things (IoT) faces security problems during multi-organization communication, and no robust framework for this has been agreed upon by the organizations involved. Combining blockchain technology with IoT makes the system more secure and addresses these multi-organization communication issues. Many blockchain applications have been developed to secure IoT, but each suits only certain types of IoT infrastructure. This paper introduces the architecture and case studies of blockchain applications, reviews application scenarios that combine blockchain with the IoT, and finally discusses four common issues that arise when the two are combined.
{"title":"Blockchain applications for Internet of Things (IoT): A review","authors":"A. Laghari, Hang Li, Shoulin Yin, Shahid Karim, A. Khan, Muhammad Ibrar","doi":"10.3233/mgs-230074","DOIUrl":"https://doi.org/10.3233/mgs-230074","url":null,"abstract":"Nowadays, Blockchain is very popular among industries to solve security issues of information systems. The Internet of Things (IoT) has security issues during multi-organization communication, and any organization approves no such robust framework. The combination of blockchain technology with IoT makes it more secure and solves the problem of multi-organization communication issues. There are many blockchain applications developed for the security of IoT, but these are only suitable for some types of IoT infrastructure. This paper introduces the architecture and case studies of blockchain applications. The application scenarios of the Blockchain combined with the Internet of Things, and finally discussed four common issues of the combination of the Blockchain and the Internet of Things.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140079616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is an important development in Information Technology that gives users on-demand access to a pool of shared computing resources. A major challenge faced by the cloud system is to assign exactly the quantity of resources that users demand while meeting the Service Level Agreement (SLA). Elasticity is the capability of adding and removing resources "on the fly" to handle load variations. However, elastic scaling forces application tasks to be suspended while resources are redistributed, which degrades Quality of Service (QoS). In this research, an optimization-based elastic scaling approach is developed that aims at an improved user experience. Load prediction is performed based on factors such as bandwidth, CPU, and memory. Horizontal as well as vertical scaling is then performed based on the predicted load using the devised leader Harris honey badger algorithm. The optimization enabled elastic scaling is evaluated on metrics such as predicted load error, cost, and resource utilization, attaining values of 0.0193, 153.581, and 0.3217, respectively.
{"title":"Optimization enabled elastic scaling in cloud based on predicted load for resource management","authors":"Naimisha Shashikant Trivedi, Shailesh D. Panchal","doi":"10.3233/mgs-230003","DOIUrl":"https://doi.org/10.3233/mgs-230003","url":null,"abstract":"Cloud computing epitomizes an important invention in the field of Information Technology, which presents users with a way of providing on-demand access to a pool of shared computing resources. A major challenge faced by the cloud system is to assign the exact quantity of resources to the users based on the demand, while meeting the Service Level Agreement (SLA). Elasticity is a major aspect that provides the cloud with the capability of adding and removing resources “on the fly” for handling load variations. However, elastic scaling requires suspension of the application tasks forcibly, while performing resource distribution; thereby Quality of Service (QoS) gets affected. In this research, an elastic scaling approach based on optimization is developed which aims at attaining an improved user experience. Here, load prediction is performed based on various factors, like bandwidth, CPU, and memory. Later, horizontal as well as vertical scaling is performed based on the predicted load using the devised leader Harris honey badger algorithm. The devised optimization enabled elastic scaling is evaluated for its effectiveness based on metrics, such as predicted load error, cost, and resource utilization, and is found to have attained values of 0.0193, 153.581, and 0.3217.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140080210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate and early detection of plant disease is important for stable agriculture and for preventing wasted financial and other resources. Hence, a new technique is devised in this work in which geese jellyfish search optimization trained deep learning is used for multiclass plant disease detection from leaf images. First, the input leaf images acquired from the database are pre-processed using a Kalman filter. Plant leaf segmentation is then performed by LinK-Net, whose training is driven by the proposed geese jellyfish search optimization, formed by combining wild geese migration optimization and the jellyfish search optimizer. Image augmentation is carried out, followed by feature extraction. Next, leaf type classification is performed by a Deep Q-Network (DQN) that is structurally adapted by the proposed geese jellyfish search optimization, and finally multi-label plant leaf disease is detected with the DQN. The proposed geese jellyfish search optimization based DQN obtains an accuracy of 89.44%, a true positive rate of 90.18%, and a false positive rate of 10.56%.
{"title":"Geese jellyfish search optimization trained deep learning for multiclass plant disease detection using leaf images","authors":"Bandi Ranjitha, Sampath A K","doi":"10.3233/mgs-230061","DOIUrl":"https://doi.org/10.3233/mgs-230061","url":null,"abstract":"Accurate and early detection of plant disease is significant for stable and proper agriculture and also for preventing the unwanted waste of financial and other possessions. Hence, a new technique is devised in this work, where geese jellyfish search optimization trained deep learning is used for multiclass detection of plant disease utilizing plant leaf images. At first, the input leaves of the plant image acquired from the database are pre-processed utilizing the Kalman filter. Then, the plant leaf segmentation is done by LinK-Net, where the training function of LinK-Net is processed by the proposed geese jellyfish search optimization, which is formed using wild geese migration optimization and jellyfish search optimizer. Then, image augmentation is carried out and then the feature extraction is done. Consequently, the classification of plant leaf type is processed, which is employed by Deep Q-Network (DQN), which is structurally adapted by the proposed geese jellyfish search optimization. At last, multi-label plant leaf disease is detected based on DQN. Moreover, the proposed geese jellyfish search optimization based DQN obtains an accuracy of 89.44%, true positive rate of 90.18%, and false positive rate of 10.56% respectively.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140080346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing services are offered by a distributed system with a shared resource pool. According to the provider's policy, customers enjoy continuous access to these resources. Every time a job is transferred to the cloud for execution, the environment must be appropriately provisioned, which requires a sufficient number of virtual machines (VMs) to be available on the backend. As a result, the scheduling method determines how well the system functions. An intelligent scheduling algorithm distributes jobs among all VMs to balance the overall workload; this load balancing problem is NP-hard. Using spider monkey optimization, we implement a new strategy for more dependable and efficient load balancing in cloud environments. The proposed strategy aims to boost performance by assigning workloads to the least-loaded VM. The simulation results show that the proposed algorithm performs better in terms of load balancing, response time, makespan and resource utilization, and the experimental results outperform existing approaches.
{"title":"Load balancing model for cloud environment using swarm intelligence technique","authors":"G. Verma, Soumen Kanrar","doi":"10.3233/mgs-230021","DOIUrl":"https://doi.org/10.3233/mgs-230021","url":null,"abstract":"A distributed system with a shared resource pool offers cloud computing services. According to the provider’s policy, customers can enjoy continuous access to these resources. Every time a job is transferred to the cloud to be carried out, the environment must be appropriately planned. A sufficient number of virtual machines (VM) must be accessible on the backend to do this. As a result, the scheduling method determines how well the system functions. An intelligent scheduling algorithm distributes the jobs among all VMs to balance the overall workload. This problem falls into the category of NP-Hard problems and is regarded as a load balancing problem. With spider monkey optimization, we have implemented a fresh strategy for more dependable and efficient load balancing in cloud environments. The suggested optimization strategy aims to boost performance by choosing the least-loaded VM to distribute the workloads. The simulation results clearly show that the proposed algorithm performs better regarding load balancing, reaction time, make span and resource utilization. The experimental results outperform the available approaches.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138998549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing is an emerging technology that has garnered interest from academic as well as commercial domains. The cloud offers huge computing capability and resources positioned at multiple locations, irrespective of the user's time or location, and uses virtualization to dispatch multiple simultaneous tasks to servers. However, allocating tasks to heterogeneous servers requires that the load be balanced among them. To address this issue, a trust based dynamic load balancing algorithm for distributed file systems is proposed. The load on the physical machine is predicted with a Rider optimization algorithm-based Neural Network (RideNN). Load balancing is then carried out using the proposed Fractional Social Deer Optimization (FSDO) algorithm, where virtual machine migration is performed based on the load condition of the physical machine. Replica management in the distributed file system is also handled with the devised FSDO algorithm. The proposed FSDO based dynamic load balancing algorithm is evaluated on parameters such as predicted load, prediction error, trust, cost and energy consumption, with values of 0.051, 0.723, 0.390 and 0.431 J, respectively.
{"title":"Hybrid trust-based optimized virtual machine migration for dynamic load balancing and replica management in heterogeneous cloud","authors":"M. H. Nebagiri, Latha Pillappa Hanumanthappa","doi":"10.3233/mgs-230025","DOIUrl":"https://doi.org/10.3233/mgs-230025","url":null,"abstract":"Cloud computing is an upcoming technology that has garnered interest from academic as well as commercial domains. Cloud offers the advantage of providing huge computing capability as well as resources that are positioned at multiple locations irrespective of time or location of the user. Cloud utilizes the concept of virtualization to dispatch the multiple tasks encountered simultaneously to the server. However, allocation of tasks to the heterogeneous servers requires that the load is balanced among the servers. To address this issue, a trust based dynamic load balancing algorithm in distributed file system is proposed. Load balancing is performed by predicting the loads in the physical machine with the help of the Rider optimization algorithm-based Neural Network (RideNN). Further, load balancing is carried out using the proposed Fractional Social Deer Optimization (FSDO) algorithm, where the virtual machine migration is performed based on the load condition in the physical machine. Later, replica management is accomplished for managing the replica in distributed file system with the help of the devised FSDO algorithm. Moreover, the proposed FSDO based dynamic load balancing algorithm is evaluated for its performance based on parameters, like predicted load, prediction error, trust, cost and energy consumption with values 0.051, 0.723, 0.390 and 0.431J correspondingly.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138998387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social platforms disseminate news rapidly and have become an important news source for many people worldwide because of their easy access and low cost compared with traditional news organizations. Fake news is news deliberately written to manipulate original content, and its rapid dissemination can mislead society. It is therefore critical to investigate the veracity of information spread via social media platforms; even so, the reliability of information reported on these platforms remains doubtful and is a significant obstacle. This study proposes a technique for identifying fake information on social media called Adam Adadelta Optimization based Deep Long Short-Term Memory (Deep LSTM). Tokenization is carried out with the Bidirectional Encoder Representations from Transformers (BERT) approach. Feature dimensionality is reduced with Kernel Linear Discriminant Analysis (LDA) and Singular Value Decomposition (SVD), and the top-N attributes are chosen using Renyi joint entropy. The Deep LSTM, trained with Adam Adadelta Optimization, a combination of Adam and Adadelta optimization, is then applied to identify false information on social media. The Deep LSTM based on Adam Adadelta Optimization achieved a maximum accuracy, sensitivity, and specificity of 0.936, 0.942, and 0.925, respectively.
{"title":"Adam Adadelta Optimization based bidirectional encoder representations from transformers model for fake news detection on social media","authors":"S. T. S., P.S. Sreeja","doi":"10.3233/mgs-230033","DOIUrl":"https://doi.org/10.3233/mgs-230033","url":null,"abstract":"Social platform have disseminated the news in rapid speed and has been considered an important news resource for many people over worldwide because of easy access and less cost benefits when compared with the traditional news organizations. Fake news is the news deliberately written by bad writers that manipulates the original contents and this rapid dissemination of fake news may mislead the people in the society. As a result, it is critical to investigate the veracity of the data leaked via social media platforms. Even so, the reliability of information reported via this platform is still doubtful and remains a significant obstacle. As a result, this study proposes a promising technique for identifying fake information in social media called Adam Adadelta Optimization based Deep Long Short-Term Memory (Deep LSTM). The tokenization operation in this case is carried out with the Bidirectional Encoder Representations from Transformers (BERT) approach. The measurement of the features is reduced with the assistance of Kernel Linear Discriminant Analysis (LDA), and Singular Value Decomposition (SVD) and the top-N attributes are chosen by employing Renyi joint entropy. Furthermore, the LSTM is applied to identify false information in social media, with Adam Adadelta Optimization, which comprises a combo of Adam Optimization and Adadelta Optimization . The Deep LSTM based on Adam Adadelta Optimization achieved maximum accuracy, sensitivity, specificity of 0.936, 0.942, and 0.925.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138997736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowdsourcing communities, as an important way for enterprises to obtain external public innovative knowledge in the Internet era, have broad application prospects and research value. However, the influence of social preference is seldom considered when promoting knowledge sharing in crowdsourcing communities. Therefore, on the basis of complex network evolutionary game theory and social preference theory, an evolutionary game model of knowledge sharing among crowdsourcing community users is constructed on a small-world network structure. Using Matlab, the evolution and dynamic equilibrium of knowledge sharing among users on this network are simulated, and the results with and without social preference are compared and analysed. The analysis shows that social preference can significantly promote the evolution of knowledge sharing in crowdsourcing communities. This research expands the scope of applying complex network games to other disciplines, enriches the theoretical perspective of knowledge sharing research in crowdsourcing communities, and offers practical guidance for promoting knowledge sharing in such communities.
{"title":"An evolutionary mechanism of social preference for knowledge sharing in crowdsourcing communities","authors":"Jianfeng Meng, Gongpeng Zhang, Zihan Li, Hongji Yang","doi":"10.3233/mgs-221532","DOIUrl":"https://doi.org/10.3233/mgs-221532","url":null,"abstract":"Crowdsourcing community, as an important way for enterprises to obtain external public innovative knowledge in the era of the Internet and the rise of users, has a very broad application prospect and research value. However, the influence of social preference is seldom considered in the promotion of knowledge sharing in crowdsourcing communities. Therefore, on the basis of complex network evolutionary game theory and social preference theory, an evolutionary game model of knowledge sharing among crowdsourcing community users based on the characteristics of small world network structure is constructed. Through Matlab programming, the evolution and dynamic equilibrium of knowledge sharing among crowdsourcing community users on this network structure are simulated, and the experimental results without considering social preference and social preference are compared and analysed, and it is found that social preference can significantly promote the evolution of knowledge sharing in crowdsourcing communities. This research expands the research scope of the combination and application of complex network games and other disciplines, enriches the theoretical perspective of knowledge sharing research in crowdsourcing communities, and has a strong guiding significance for promoting knowledge sharing in crowdsourcing communities.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.7,"publicationDate":"2023-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138998918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of land cover using satellite images has been a major research area in recent years. The growing quantity of data produced by satellite imaging systems demands an automated classification tool. Satellite images exhibit temporal and/or spatial dependencies on which traditional artificial intelligence approaches do not perform well. Hence, the suggested approach uses a new framework for classifying land cover. Histogram linearisation is first carried out during pre-processing. Spectral and spatial features are then extracted and merged in a feature fusion step. Finally, at the classification phase, an optimized Long Short-Term Memory (LSTM) and Deep Belief Network (DBN) are introduced to produce precise classification results; in particular, the Opposition Behavior Learning based Water Wave Optimization (OBL-WWO) model is used to tune the weights of the LSTM and DBN. Several metrics illustrate the new approach's effectiveness.
{"title":"Feature level fusion for land cover classification with landsat images: A hybrid classification model","authors":"Malige Gangappa","doi":"10.3233/mgs-230034","DOIUrl":"https://doi.org/10.3233/mgs-230034","url":null,"abstract":"Classification of land cover using satellite images was a major area for the past few years. A raise in the quantity of data obtained by satellite image systems insists on the requirement for an automated tool for classification. Satellite images demonstrate temporal or/and spatial dependencies, where the traditional artificial intelligence approaches do not succeed to execute well. Hence, the suggested approach utilizes a brand-new framework for classifying land cover Histogram Linearisation is first carried out throughout pre-processing. The features are then retrieved, including spectral and spatial features. Additionally, the generated features are merged throughout the feature fusion process. Finally, at the classification phase, an optimized Long Short-Term Memory (LSTM) and Deep Belief Network (DBN) are introduced that portrays classified results in a precise way. Especially, the Opposition Behavior Learning based Water Wave Optimization (OBL-WWO) model is used for tuning the weights of LSTM and DBN. Atlast, many metrics illustrate the new approach’s effectiveness.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135302173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In data mining, deep learning and machine learning models face class imbalance problems, which result in a lower detection rate for minority class samples. An improved Synthetic Minority Over-sampling Technique (SMOTE) is introduced for effective imbalanced data classification. After collecting the raw data from the PIMA, Yeast, E.coli, and Breast Cancer Wisconsin databases, pre-processing is performed using min-max normalization, cleaning, integration, and data transformation techniques to achieve data with better uniqueness, consistency, completeness and validity. The improved SMOTE algorithm is applied to the pre-processed data to balance the class distribution, and the balanced data is then fed to the machine learning classifiers, Support Vector Machine (SVM), Random Forest, and Decision Tree, for classification. Experimental examination confirmed that the improved SMOTE algorithm with random forest attained significant classification results, with Area under the Curve (AUC) of 94.30%, 91%, 96.40%, and 99.40% on the PIMA, Yeast, E.coli, and Breast Cancer Wisconsin databases, respectively.
{"title":"Imbalanced data classification using improved synthetic minority over-sampling technique","authors":"Yamijala Anusha, R. Visalakshi, Konda Srinivas","doi":"10.3233/mgs-230007","DOIUrl":"https://doi.org/10.3233/mgs-230007","url":null,"abstract":"In data mining, deep learning and machine learning models face class imbalance problems, which result in a lower detection rate for minority class samples. An improved Synthetic Minority Over-sampling Technique (SMOTE) is introduced for effective imbalanced data classification. After collecting the raw data from PIMA, Yeast, E.coli, and Breast cancer Wisconsin databases, the pre-processing is performed using min-max normalization, cleaning, integration, and data transformation techniques to achieve data with better uniqueness, consistency, completeness and validity. An improved SMOTE algorithm is applied to the pre-processed data for proper data distribution, and then the properly distributed data is fed to the machine learning classifiers: Support Vector Machine (SVM), Random Forest, and Decision Tree for data classification. Experimental examination confirmed that the improved SMOTE algorithm with random forest attained significant classification results with Area under Curve (AUC) of 94.30%, 91%, 96.40%, and 99.40% on the PIMA, Yeast, E.coli, and Breast cancer Wisconsin databases.","PeriodicalId":43659,"journal":{"name":"Multiagent and Grid Systems","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135302332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}