Cotton, a crucial cash crop in Pakistan, faces persistent threats from diseases, notably the Cotton Leaf Curl Virus (CLCuV). Accurate, early detection of these diseases is vital for effective management. This paper offers a comprehensive account of the process of collecting, preprocessing, and analyzing an extensive dataset of cotton leaf images, built primarily to support automated disease detection systems. We describe the data collection procedure, the distribution of the dataset, the preprocessing stages, the feature extraction methods, and potential applications. We also present the preliminary findings of our analyses and emphasize the significance of such datasets in advancing agricultural technology. Many factors influence plant growth, but plant diseases, such as Cotton Leaf Curl Disease (CLCuD) caused by the Cotton Leaf Curl Geminivirus (CLCuV), pose a substantial threat to cotton yield. Identifying CLCuD promptly, especially in areas lacking critical infrastructure, remains a formidable challenge. Although cotton leaf diseases have received substantial research attention, deep learning continues to play a vital role across agricultural applications. In this study, we harness deep learning models, specifically the Convolutional Neural Network (CNN). We evaluate these models on two distinct datasets: the publicly available Kaggle dataset and our proprietary collection, together comprising 1349 images of both healthy and disease-affected cotton leaves. Our curated dataset is categorized into five groups: Healthy, Fully Susceptible, Partially Susceptible, Fully Resistant, and Partially Resistant. Agricultural experts annotated the dataset based on their expertise in identifying abnormal growth patterns and appearances.
Data augmentation improves model performance, and deep features are extracted to support both training and testing. Notably, the CNN model outperforms the others, achieving an accuracy of 99% on our proprietary dataset.
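The data augmentation step mentioned above can be pictured with a minimal sketch. The specific operations here (flips and 90-degree rotations) are generic choices for illustration, not necessarily the transformations the authors applied:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate simple augmented variants of an H x W x C leaf image:
    the original plus horizontal/vertical flips and 90-degree rotations."""
    return [
        image,
        np.fliplr(image),      # mirror left-right
        np.flipud(image),      # mirror top-bottom
        np.rot90(image, k=1),  # rotate 90 degrees
        np.rot90(image, k=2),  # rotate 180 degrees
        np.rot90(image, k=3),  # rotate 270 degrees
    ]

# A single 4x4 single-channel "image" yields six training samples.
img = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
augmented = augment(img)
```

Each source image thus contributes several samples, which is how augmentation enlarges a 1349-image collection for CNN training.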
"Detection of cotton leaf curl disease’s susceptibility scale level based on deep learning", Rubaina Nazeer, Sajid Ali, Zhihua Hu, Ghulam Jillani Ansari, Muna Al-Razgan, Emad Mahrous Awwad, Yazeed Yasin Ghadi. Journal of Cloud Computing, published 2024-02-26. DOI: 10.1186/s13677-023-00582-9
Pub Date: 2024-02-23 | DOI: 10.1186/s13677-024-00595-y
S. Gayathri, D. Surendran
Anomaly detection in Wireless Sensor Networks (WSNs) is critical for their reliable and secure operation, and optimizing resource efficiency is crucial for reducing energy consumption. Two new algorithms are developed for anomaly detection in WSNs: Ensemble Federated Learning (EFL) with Cloud Integration, and Online Anomaly Detection with Energy-Efficient Techniques (OAD-EE) with Cloud-based Model Aggregation. EFL with Cloud Integration uses ensemble methods and federated learning to enhance detection accuracy and data privacy. OAD-EE with Cloud-based Model Aggregation uses online learning and energy-efficient techniques to conserve energy on resource-constrained sensor nodes. Combining EFL and OAD-EE yields a comprehensive and efficient framework for anomaly detection in WSNs. Experimental results show that EFL with Cloud Integration achieves the highest detection accuracy, while OAD-EE with Cloud-based Model Aggregation has the lowest energy consumption and fastest detection time among all algorithms, making it suitable for real-time applications. The unified algorithm contributes to the system's overall efficiency, scalability, and real-time response, and by integrating cloud computing it opens new avenues for advanced WSN applications. These promising approaches to anomaly detection in resource-constrained and large-scale WSNs are beneficial for industrial applications.
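The federated learning component can be illustrated with a minimal federated-averaging sketch: each sensor node trains on its private data and only model parameters reach the cloud aggregator. This is a generic FedAvg sketch under invented data and a toy logistic-regression detector, not the authors' EFL or OAD-EE implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One node's local training: logistic-regression gradient steps
    on its private data; raw readings never leave the node."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid anomaly score
        w -= lr * X.T @ (p - y) / len(y)   # gradient step
    return w

def federated_average(client_weights, client_sizes):
    """Cloud-side aggregation: size-weighted mean of client parameters."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Three sensor nodes with private (feature, label) data; the anomaly
# label here is a toy rule invented for illustration.
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    y = (np.abs(X).sum(axis=1) > 2.5).astype(float)
    clients.append((X, y))

for _round in range(5):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Only `updates` (parameter vectors) cross the network each round, which is the privacy property the abstract emphasizes.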
"Unified ensemble federated learning with cloud computing for online anomaly detection in energy-efficient wireless sensor networks", S. Gayathri, D. Surendran. Journal of Cloud Computing, published 2024-02-23. DOI: 10.1186/s13677-024-00595-y
Pub Date: 2024-02-21 | DOI: 10.1186/s13677-024-00601-3
Jing Zhu, Chuanjiang Hu, Edris Khezri, Mohd Mustafa Mohd Ghazali
The integration of edge intelligence (EI) in animation design, particularly when dealing with large models, represents a significant advancement in the field of computer graphics and animation. This survey aims to provide a comprehensive overview of the current state and future prospects of EI-assisted animation design, focusing on the challenges and opportunities presented by large model implementations. Edge intelligence, characterized by its decentralized processing and real-time data analysis capabilities, offers a transformative approach to handling the computational and data-intensive demands of modern animation. This paper explores various aspects of EI in animation and then delves into the specifics of large models in animation, examining their evolution, current trends, and the inherent challenges in their implementation. Finally, the paper addresses the challenges and solutions in integrating EI with large models in animation, proposing future research directions. This survey serves as a valuable resource for researchers, animators, and technologists, offering insights into the potential of EI in revolutionizing animation design and opening new avenues for creative and efficient animation production.
"Edge intelligence-assisted animation design with large models: a survey", Jing Zhu, Chuanjiang Hu, Edris Khezri, Mohd Mustafa Mohd Ghazali. Journal of Cloud Computing, published 2024-02-21. DOI: 10.1186/s13677-024-00601-3
Pub Date: 2024-02-19 | DOI: 10.1186/s13677-024-00604-0
Meiyan Li, Qinyong Wang, Yuwei Liao
This paper investigates automatic target tracking in emerging remote sensing video-generating tools based on microwave imaging technology and radars. A moving-target tracking system is proposed that is low in complexity and fast to implement on edge nodes in a mini-satellite or drone network, bringing machine intelligence to large-scale vision systems, in particular for marine transportation systems. The system uses a group of image processing tools for video pre-processing and Kalman filtering for the main tracking task. To test system performance, two measures, accuracy and false-alarm probability, are computed on real vision data. Two types of scenes are analyzed: scenes with a single target, and scenes with multiple targets, which are more challenging for automatic target detection and tracking systems. The proposed system achieved high performance in our tests.
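The Kalman filtering at the core of such a tracker can be sketched minimally with a constant-velocity model; the noise covariances and the simulated 1-D track below are invented for illustration and are not the paper's parameters:

```python
import numpy as np

# Constant-velocity Kalman filter for 1-D target tracking.
# State x = [position, velocity]; only position is measured.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # measurement model
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
true_pos = np.arange(0.0, 20.0, dt)      # target moving at 1 unit/step
measurements = true_pos + rng.normal(0, 0.7, size=true_pos.size)

x, P = np.array([0.0, 0.0]), np.eye(2)
estimates = []
for z in measurements:
    x, P = kalman_step(x, P, np.array([z]))
    estimates.append(x[0])
```

The filter smooths the noisy radar/video detections and keeps a velocity estimate, which is what allows frame-to-frame association of moving targets.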
"Target tracking using video surveillance for enabling machine vision services at the edge of marine transportation systems based on microwave remote sensing", Meiyan Li, Qinyong Wang, Yuwei Liao. Journal of Cloud Computing, published 2024-02-19. DOI: 10.1186/s13677-024-00604-0
Pub Date: 2024-02-17 | DOI: 10.1186/s13677-024-00610-2
Yanal Alahmad, Anjali Agarwal
Ensuring application service availability is a critical aspect of delivering quality cloud computing services. However, placing virtual machines (VMs) on computing servers to provision these services can present significant challenges, particularly in meeting the requirements of application service providers. In this paper, we present a framework that addresses the NP-hard dynamic VM placement problem in order to optimize application availability in the cloud computing paradigm. The problem is modeled as an integer nonlinear programming (INLP) optimization with multiple objectives and constraints. The framework comprises three major modules that use optimization methods and algorithms to determine the most effective VM placement strategy in cases of application deployment, failure, and scaling. Our primary goals are to minimize power consumption, resource waste, and server failures while ensuring that application availability requirements are met. We compare our proposed heuristic VM placement solution with three related algorithms from the literature and find that it outperforms them in several key areas: it admits more applications, reduces power consumption, and increases CPU and RAM utilization of the servers. Moreover, we use a deep learning method with high accuracy and low error loss to predict application task failures, allowing proactive protection actions that reduce service outages. Overall, our framework provides a comprehensive solution by optimizing dynamic VM placement. Therefore, the framework can improve the quality of cloud computing services and enhance the experience for users.
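The flavor of heuristic VM placement can be conveyed with a toy first-fit-decreasing sketch: pack VMs onto as few servers as possible, since fewer active servers means lower power and less resource waste. This is a generic bin-packing illustration with invented capacities, not the paper's multi-objective INLP framework:

```python
# Toy first-fit-decreasing VM placement: largest VMs first, each placed on
# the first server with enough spare CPU and RAM, opening a new server
# only when none fits.

def place_vms(vm_demands, server_capacity):
    """vm_demands: list of (cpu, ram) requests; returns a list of servers,
    each a dict with remaining capacity and hosted VM indices."""
    servers = []
    # Placing the largest VMs first reduces fragmentation.
    order = sorted(range(len(vm_demands)),
                   key=lambda i: sum(vm_demands[i]), reverse=True)
    for i in order:
        cpu, ram = vm_demands[i]
        for s in servers:                       # first fit
            if s["cpu"] >= cpu and s["ram"] >= ram:
                s["cpu"] -= cpu
                s["ram"] -= ram
                s["vms"].append(i)
                break
        else:                                   # no server fits: open one
            cap_cpu, cap_ram = server_capacity
            servers.append({"cpu": cap_cpu - cpu,
                            "ram": cap_ram - ram,
                            "vms": [i]})
    return servers

vms = [(4, 8), (2, 4), (8, 16), (1, 2), (4, 4)]   # (cpu cores, GB RAM), invented
placement = place_vms(vms, server_capacity=(8, 16))
```

A real placement framework like the one described above would add availability, failure, and scaling constraints on top of this packing objective.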
"Multiple objectives dynamic VM placement for application service availability in cloud networks", Yanal Alahmad, Anjali Agarwal. Journal of Cloud Computing, published 2024-02-17. DOI: 10.1186/s13677-024-00610-2
Cloud computing provides outsourcing of computing services at lower cost, making it a popular choice for many businesses. In recent years, cloud data storage has seen significant success thanks to its advantages in maintenance, performance, support, cost, and reliability over traditional storage methods. However, despite the benefits of disaster recovery, scalability, and resource backup, some organizations still prefer traditional data storage over cloud storage because of concerns about data correctness and security. Data integrity is a critical issue in cloud computing, since data owners must rely on third-party cloud storage providers to handle their data. To address this, researchers have been developing new algorithms for data integrity strategies in cloud storage that enhance security and ensure the accuracy of outsourced data. This article highlights the security issues and possible attacks on cloud storage, and discusses the phases, characteristics, and classification of data integrity strategies. A comparative analysis of these strategies in the context of cloud storage is also presented. Furthermore, the overhead parameters of auditing system models in cloud computing are examined against the desired design goals. By understanding and addressing these factors, organizations can make informed decisions about their cloud storage solutions that account for both security and performance.
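The basic correctness check underlying such integrity strategies can be sketched with digests: the owner keeps a hash of each block before outsourcing and later verifies the provider's copy. This is only the naive retrieve-and-hash baseline (real auditing schemes such as provable data possession avoid downloading whole blocks); the block names and data are invented:

```python
import hashlib

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

owner_records = {}   # block_id -> digest, kept locally by the data owner

def outsource(block_id: str, block: bytes, cloud: dict) -> None:
    """Record the block's digest, then hand the block to the provider."""
    owner_records[block_id] = digest(block)
    cloud[block_id] = block

def audit(block_id: str, cloud: dict) -> bool:
    """True iff the provider's copy still matches the stored digest."""
    return digest(cloud.get(block_id, b"")) == owner_records.get(block_id)

cloud_store = {}                                  # stands in for the provider
outsource("b1", b"records-2024", cloud_store)
ok_before = audit("b1", cloud_store)
cloud_store["b1"] = b"tampered"                   # simulate corruption
ok_after = audit("b1", cloud_store)
```

The overhead parameters the article surveys (communication, storage, computation of the auditor) measure exactly how far a scheme improves on this download-everything baseline.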
"Investigation on storage level data integrity strategies in cloud computing: classification, security obstructions, challenges and vulnerability", Paromita Goswami, Neetu Faujdar, Somen Debnath, Ajoy Kumar Khan, Ghanshyam Singh. Journal of Cloud Computing, published 2024-02-15. DOI: 10.1186/s13677-024-00605-z
With the rapid development of the Internet of Medical Things (IoMT) and the increasing concern for personal health, sharing Electronic Medical Record (EMR) data is widely recognized as a crucial method for enhancing the quality of care and reducing healthcare expenses. EMRs are often shared to ensure accurate diagnosis, predict prognosis, and provide health advice. However, the process of sharing EMRs always raises significant concerns about potential security issues and breaches of privacy. Previous research has demonstrated that centralized cloud-based EMR systems are at high risk, e.g., single points of failure, denial of service (DoS) attacks, and insider attacks. With this motivation, we propose an EMR sharing scheme based on a consortium blockchain that is designed to prioritize both security and privacy. The interplanetary file system (IPFS) is used to store the encrypted EMR while the returned hash addresses are recorded on the blockchain. Then, the user can authorize other users to decrypt the EMR ciphertext via the proxy re-encryption algorithm, ensuring that only authorized personnel may access the files. Moreover, the scheme attains personalized access control and guarantees privacy protection by employing attribute-based access control. The safety analysis shows that the designed scheme meets the expected design goals. Security analysis and performance evaluation show that the scheme outperforms the comparison schemes in terms of computation and communication costs.
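The off-chain/on-chain split described above can be sketched minimally: the encrypted EMR goes into a content-addressed store (standing in for IPFS) and only its hash address is appended to a ledger. The proxy re-encryption and attribute-based access control of the actual scheme are omitted here, and the ciphertext is a placeholder, not real encryption:

```python
import hashlib
import json
import time

ipfs = {}      # content-addressed store: sha256 hash -> bytes (IPFS stand-in)
ledger = []    # append-only record list standing in for the consortium chain

def ipfs_add(data: bytes) -> str:
    """Store data under its own hash, as a content-addressed store does."""
    addr = hashlib.sha256(data).hexdigest()
    ipfs[addr] = data
    return addr

def record_on_chain(patient_id: str, addr: str) -> None:
    ledger.append({"patient": patient_id, "addr": addr, "ts": time.time()})

def fetch_and_verify(addr: str) -> bytes:
    """Retrieve a block; content addressing makes tampering detectable."""
    data = ipfs[addr]
    assert hashlib.sha256(data).hexdigest() == addr
    return data

# Placeholder ciphertext; a real system would store proxy-re-encryptable EMR data.
ciphertext = json.dumps({"emr": "<encrypted payload placeholder>"}).encode()
addr = ipfs_add(ciphertext)
record_on_chain("patient-42", addr)
retrieved = fetch_and_verify(ledger[-1]["addr"])
```

Keeping only hash addresses on-chain is what lets the scheme combine blockchain immutability with off-chain storage of large, encrypted records.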
"A secure and efficient electronic medical record data sharing scheme based on blockchain and proxy re-encryption", Guijiang Liu, Haibo Xie, Wenming Wang, Haiping Huang. Journal of Cloud Computing, published 2024-02-15. DOI: 10.1186/s13677-024-00608-w
The Smart Grid (SG) heavily depends on Advanced Metering Infrastructure (AMI) technology, which has shown its vulnerability to intrusions. To effectively monitor and raise alarms in response to anomalous activities, the Intrusion Detection System (IDS) plays a crucial role. However, existing intrusion detection models are typically trained on cloud servers, which exposes user data to significant privacy risks and extends the time required for intrusion detection. Training a high-quality IDS with Artificial Intelligence (AI) technologies on a single entity becomes particularly challenging when dealing with vast amounts of data distributed across the network. To address these concerns, this paper presents a novel approach: a fog-edge-enabled Support Vector Machine (SVM)-based federated learning (FL) IDS for SGs. FL is an AI technique for training models across edge devices. In this system, only learning parameters are shared with the global model, ensuring data privacy while enabling collaborative learning to develop a high-quality IDS model. Test and validation results demonstrate the model's superiority over existing methods, with improvements of 4.17% in accuracy, 13.19% in recall, 9.63% in precision, and 13.19% in F1 score on the NSL-KDD dataset. The model also performed exceptionally well on the CICIDS2017 dataset, with improvements in accuracy, precision, recall, and F1 score of 6.03%, 6.03%, 7.57%, and 7.08%, respectively. This approach enhances intrusion detection accuracy and safeguards user data and privacy in SG systems, making it a significant advancement in the field.
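The four metrics reported above follow directly from a binary confusion matrix. The counts in this sketch are invented for illustration and are not results from the paper:

```python
# Accuracy, precision, recall, and F1 from binary confusion-matrix counts:
# tp = attacks flagged as attacks, fp = benign flagged as attacks,
# fn = attacks missed, tn = benign passed through.

def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = metrics(tp=90, fp=10, fn=5, tn=95)   # invented counts
```

For an IDS, recall (missed attacks) and precision (false alarms) pull in opposite directions, which is why papers in this area report all four numbers rather than accuracy alone.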
"A fog-edge-enabled intrusion detection system for smart grids", Noshina Tariq, Amjad Alsirhani, Mamoona Humayun, Faeiz Alserhani, Momina Shaheen. Journal of Cloud Computing, published 2024-02-14. DOI: 10.1186/s13677-024-00609-9
Pub Date : 2024-02-14DOI: 10.1186/s13677-024-00606-y
Mohammad Zunnun Khan, Mohd Shoaib, Mohd Shahid Husain, Khair Ul Nisa, Mohammad. Tabrez Quasim
Cloud computing is a defining paradigm of the current cyber era, and most organizations now rely on it heavily. This growing reliance, however, also makes the Cloud more vulnerable: as vulnerability increases, so does the need for data privacy and for the use of secure services. Data on the Cloud must therefore be protected by privacy mechanisms that ensure both personal and organizational privacy, which in turn requires an authentic way to increase the trust and reliability of organizations and individuals. The authors have devised a ranking approach that uses the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS); based on the results and their comparison, it also produces cost-, benefit-, risk-, and opportunity-based outcomes. In this paper, we develop a cloud data privacy model. To this end, we conducted an intensive literature review covering privacy factors such as Access Control, Authentication, Authorization, Trustworthiness, Confidentiality, Integrity, and Availability, and based on that review we selected parameters that affect cloud data privacy in all phases of the data life cycle. Most existing methods need to be revised in line with current industry trends. Here, we use the AHP and TOPSIS methods to show that our model outperforms other cloud data privacy models. The author selected the weights of the individual cloud data privacy criteria, calculated the rank of each criterion using the AHP method, and subsequently used the final weights as input to the TOPSIS method to rank the cloud data privacy criteria.
{"title":"Enhanced mechanism to prioritize the cloud data privacy factors using AHP and TOPSIS: a hybrid approach","authors":"Mohammad Zunnun Khan, Mohd Shoaib, Mohd Shahid Husain, Khair Ul Nisa, Mohammad. Tabrez Quasim","doi":"10.1186/s13677-024-00606-y","DOIUrl":"https://doi.org/10.1186/s13677-024-00606-y","url":null,"abstract":"Cloud computing is a defining paradigm of the current cyber era, and most organizations now rely on it heavily. This growing reliance, however, also makes the Cloud more vulnerable: as vulnerability increases, so does the need for data privacy and for the use of secure services. Data on the Cloud must therefore be protected by privacy mechanisms that ensure both personal and organizational privacy, which in turn requires an authentic way to increase the trust and reliability of organizations and individuals. The authors have devised a ranking approach that uses the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS); based on the results and their comparison, it also produces cost-, benefit-, risk-, and opportunity-based outcomes. In this paper, we develop a cloud data privacy model. To this end, we conducted an intensive literature review covering privacy factors such as Access Control, Authentication, Authorization, Trustworthiness, Confidentiality, Integrity, and Availability, and based on that review we selected parameters that affect cloud data privacy in all phases of the data life cycle. Most existing methods need to be revised in line with current industry trends. Here, we use the AHP and TOPSIS methods to show that our model outperforms other cloud data privacy models. 
The author selected the weights of the individual cloud data privacy criteria, calculated the rank of each criterion using the AHP method, and subsequently used the final weights as input to the TOPSIS method to rank the cloud data privacy criteria.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":"100 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139762510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
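The AHP-then-TOPSIS pipeline this abstract outlines can be sketched directly: AHP derives criterion weights from a pairwise comparison matrix (here via the common column-normalization approximation rather than the principal eigenvector), and TOPSIS ranks alternatives by relative closeness to the ideal solution. The matrices below are hypothetical, not the paper's data:

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority vector via the column-normalization approximation:
    normalize each column of the pairwise comparison matrix, then
    average across each row to get the criterion weights."""
    m = np.asarray(pairwise, dtype=float)
    return (m / m.sum(axis=0)).mean(axis=1)

def topsis_rank(decision, weights, benefit):
    """TOPSIS: rank alternatives (rows of the decision matrix) by
    relative closeness to the ideal solution. benefit[j] is True when
    criterion j is higher-is-better, False when it is a cost."""
    d = np.asarray(decision, dtype=float)
    v = d / np.linalg.norm(d, axis=0) * weights   # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    s_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal
    s_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    closeness = s_minus / (s_plus + s_minus)
    return np.argsort(-closeness), closeness
```

For example, `ahp_weights([[1, 2], [0.5, 1]])` yields weights of roughly 2/3 and 1/3 for two criteria, which `topsis_rank` then applies when scoring alternatives; an alternative that dominates on every benefit criterion receives a closeness of 1.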
Pub Date : 2024-02-13DOI: 10.1186/s13677-024-00603-1
Junyan Chen, Wei Xiao, Hongmei Zhang, Jiacheng Zuo, Xinmei Li
Optimizing resource allocation and routing to satisfy service needs is paramount in large-scale networks. Software-defined networking (SDN) is a network paradigm that decouples forwarding from control, enabling dynamic management and configuration through programming and making it possible to deploy intelligent control algorithms (such as deep reinforcement learning) to solve network routing optimization problems. Although such intelligent routing optimization schemes can capture network state characteristics, they are prone to falling into local optima, resulting in poor convergence. To address this issue, this paper proposes an African Vulture Routing Optimization (AVRO) algorithm for SDN routing optimization. AVRO is based on the African Vulture Optimization Algorithm (AVOA), a population-based metaheuristic with global optimization ability and fast convergence. First, we improve the population initialization of AVOA according to the characteristics of the network routing problem, enhancing the algorithm's awareness of the network topology. Next, we add an optimization phase that strengthens the exploitation of the AVOA algorithm and achieves stable convergence. Finally, we model the network environment, define the network optimization objective, and perform comparative experiments against baseline algorithms. The experimental results demonstrate that the routing algorithm has better network awareness, with performance improvements of 16.9% over deep reinforcement learning algorithms and 71.8% over traditional routing schemes.
{"title":"Dynamic routing optimization in software-defined networking based on a metaheuristic algorithm","authors":"Junyan Chen, Wei Xiao, Hongmei Zhang, Jiacheng Zuo, Xinmei Li","doi":"10.1186/s13677-024-00603-1","DOIUrl":"https://doi.org/10.1186/s13677-024-00603-1","url":null,"abstract":"Optimizing resource allocation and routing to satisfy service needs is paramount in large-scale networks. Software-defined networking (SDN) is a network paradigm that decouples forwarding from control, enabling dynamic management and configuration through programming and making it possible to deploy intelligent control algorithms (such as deep reinforcement learning) to solve network routing optimization problems. Although such intelligent routing optimization schemes can capture network state characteristics, they are prone to falling into local optima, resulting in poor convergence. To address this issue, this paper proposes an African Vulture Routing Optimization (AVRO) algorithm for SDN routing optimization. AVRO is based on the African Vulture Optimization Algorithm (AVOA), a population-based metaheuristic with global optimization ability and fast convergence. First, we improve the population initialization of AVOA according to the characteristics of the network routing problem, enhancing the algorithm's awareness of the network topology. Next, we add an optimization phase that strengthens the exploitation of the AVOA algorithm and achieves stable convergence. Finally, we model the network environment, define the network optimization objective, and perform comparative experiments against baseline algorithms. 
The experimental results demonstrate that the routing algorithm has better network awareness, with a performance improvement of 16.9% compared to deep reinforcement learning algorithms and 71.8% compared to traditional routing schemes.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139762509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
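The population-based search idea behind this abstract can be illustrated with a deliberately simplified sketch: initialize a population of candidate paths with topology-aware random walks, then iteratively keep the cheapest candidates and refresh the worst. This is a generic metaheuristic loop for intuition only, with a hypothetical toy topology; it does not reproduce the AVOA/AVRO update rules:

```python
import random

def random_path(graph, src, dst, rng):
    """Topology-aware initialization: a loop-free random walk from src
    to dst over the adjacency structure. Returns None on a dead end."""
    path, node = [src], src
    while node != dst:
        choices = [n for n in graph[node] if n not in path]
        if not choices:
            return None
        node = rng.choice(choices)
        path.append(node)
    return path

def path_cost(graph, path):
    """Sum of link weights along a path."""
    return sum(graph[a][b] for a, b in zip(path, path[1:]))

def metaheuristic_route(graph, src, dst, pop_size=20, iters=50, seed=0):
    """Generic population-based search: keep pop_size candidate paths;
    each iteration the cheaper half survives and the worse half is
    replaced by fresh random candidates (exploration)."""
    rng = random.Random(seed)
    population = []
    while len(population) < pop_size:
        p = random_path(graph, src, dst, rng)
        if p is not None:
            population.append(p)
    for _ in range(iters):
        population.sort(key=lambda p: path_cost(graph, p))
        for i in range(pop_size // 2, pop_size):
            p = random_path(graph, src, dst, rng)
            if p is not None:
                population[i] = p
    return min(population, key=lambda p: path_cost(graph, p))
```

On a toy topology such as `{'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'D': 1}, 'C': {'A': 4, 'D': 1}, 'D': {'B': 1, 'C': 1}}`, the search settles on the cheaper path A-B-D; AVRO's contribution, per the abstract, is a smarter initialization plus an added exploitation phase in place of this blind refresh.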