Smart cities cannot function without autonomous devices that connect wirelessly and provide cellular connectivity and processing. Edge computing bridges mobile devices and the cloud, giving mobile devices access to computing, memory, and communication capabilities via vehicular ad hoc networks (VANETs). A VANET is a time-constrained technology that must handle requests from vehicles within tight deadlines. The best-known problems with edge computing and VANETs are latency and delay: any congestion or inefficiency in the network introduces latency, which degrades overall efficiency. Latency-affected data processing in a smart city can produce erratic decision making; data such as traffic and congestion reports must be acted on in time, and delayed decisions can cause application failures and incorrect information processing. In this study, we created a probability-based hybrid Whale–Dragonfly Optimization (p–H-WDFOA) edge computing model for smart urban vehicle transportation that lowers the delay and latency of edge computing to address these issues. Localized 5G Multi-Access Edge Computing (MEC) servers were additionally employed, significantly reducing wait time and latency to enhance edge resources and meet the latency and Quality of Service (QoS) criteria. Compared with an experiment employing a pure cloud computing architecture, we reduced data latency by 20% and processing time by 35%. The proposed method, WDFO-VANET, also improves energy consumption and minimizes the communication costs of the VANET.
Title: Smart City Transportation: A VANET Edge Computing Model to Minimize Latency and Delay Utilizing 5G Network
Authors: Mengqi Wang, Jiayuan Mao, Wei Zhao, Xinya Han, Mengya Li, Chuanjun Liao, Haomiao Sun, Kexin Wang
Pub Date: 2024-02-08 | DOI: 10.1007/s10723-024-09747-5 | Journal of Grid Computing
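The abstract does not give the p–H-WDFOA update equations, so the following is a minimal, hypothetical sketch: a probability `p` picks between a simplified whale "encircling" move and a simplified dragonfly "attraction" move, applied to a toy task-to-edge-server assignment whose makespan stands in for latency. The task sizes, server speeds, and all parameters are illustrative, not taken from the paper.

```python
import random

random.seed(7)

TASKS  = [12.0, 7.5, 20.0, 3.0, 9.0, 15.0]   # task sizes (Mcycles, illustrative)
SPEEDS = [4.0, 2.5, 3.0]                     # edge-server speeds (Mcycles/ms)

def latency(pos):
    """Decode a continuous position into a task->server assignment and
    return the makespan (max per-server completion time) as a latency proxy."""
    load = [0.0] * len(SPEEDS)
    for size, x in zip(TASKS, pos):
        s = int(abs(x)) % len(SPEEDS)
        load[s] += size / SPEEDS[s]
    return max(load)

def whale_step(x, best, a):
    """Simplified WOA 'encircling prey' move toward the best position."""
    r = random.random()
    A, C = 2 * a * r - a, 2 * random.random()
    return [b - A * abs(C * b - xi) for xi, b in zip(x, best)]

def dragonfly_step(x, best):
    """Simplified DFO move: attraction to food (best) plus small jitter."""
    return [xi + 0.5 * (b - xi) + random.gauss(0, 0.3) for xi, b in zip(x, best)]

def p_hybrid_wdfo(p=0.5, pop=10, iters=60):
    dim = len(TASKS)
    swarm = [[random.uniform(0, len(SPEEDS)) for _ in range(dim)] for _ in range(pop)]
    best = min(swarm, key=latency)
    for it in range(iters):
        a = 2 * (1 - it / iters)          # WOA coefficient decays over iterations
        for i, x in enumerate(swarm):
            cand = whale_step(x, best, a) if random.random() < p else dragonfly_step(x, best)
            if latency(cand) < latency(x):   # greedy acceptance
                swarm[i] = cand
        best = min(swarm + [best], key=latency)
    return best, latency(best)

best_pos, best_lat = p_hybrid_wdfo()
```

The probability `p` is the hybridization knob: `p=1` degenerates to pure whale moves, `p=0` to pure dragonfly moves.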
Pub Date: 2024-02-05 | DOI: 10.1007/s10723-023-09728-0
S. Bajpai, A. Patankar
Title: Marine Goal Optimizer Tuned Deep BiLSTM-Based Self-Configuring Intrusion Detection in Cloud
Pub Date: 2024-02-02 | DOI: 10.1007/s10723-023-09737-z
Wenxia Ye
E-commerce is a growing industry that primarily relies on websites to provide services and products to businesses and customers. As a new form of international trade, cross-border e-commerce offers numerous benefits, including increased accessibility. Although cross-border e-commerce has a bright future, managing the global supply chain is crucial to withstanding competitive pressure and growing steadily. Traditional purchase-volume forecasting uses time-series data and a straightforward prediction methodology. Numerous customer consumption habits, including the number of products or services, product collections, and taxpayer subsidies, influence the platform's sales volume. The use of the e-commerce (EC) supply chain has expanded significantly in the past few years because of the economy's recent rapid growth. The proposed method develops a Short-Term Demand-based Deep Neural Network and Cold Supply Chain Optimization method for predicting commodity purchase volume. The deep neural network technique provides a cold supply chain demand forecasting framework centred on multilayer Bayesian networks (BNN) to forecast the short-term demand for e-commerce goods. The cold supply chain (CS) optimisation method determines the optimised inventory to hold. The research findings demonstrate that this study considers various influencing factors and chooses an appropriate forecasting technique. The proposed method achieves 96.35% accuracy, 97% precision, and 94.89% recall.
Title: E-Commerce Logistics and Supply Chain Network Optimization for Cross-Border
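The paper's multilayer-BNN forecaster is not specified in the abstract; as a stand-in, a minimal short-term demand pipeline can be sketched with simple exponential smoothing plus an order-up-to rule for the cold-chain inventory. The sales series, smoothing constant, and safety factor below are illustrative assumptions.

```python
# weekly purchase volumes for one commodity (illustrative data)
sales = [120, 132, 101, 134, 90, 130, 170, 140, 160, 155]

def ses_forecast(series, alpha=0.4):
    """Simple exponential smoothing: one-step-ahead demand forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def order_quantity(forecast, on_hand, lead_time_weeks=1, safety_factor=0.2):
    """Order-up-to rule: cover lead-time demand plus a modest safety buffer
    (perishable cold-chain stock, so the buffer is deliberately small)."""
    target = forecast * lead_time_weeks * (1 + safety_factor)
    return max(0.0, target - on_hand)

f = ses_forecast(sales)          # forecast next week's demand
q = order_quantity(f, on_hand=80)  # replenishment order given current stock
```

A learned forecaster (the paper's BNN) would replace `ses_forecast`; the inventory rule consuming the forecast stays the same.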
A Wireless Medical Sensor Network (WMSN) is a kind of ad hoc network used in the health sector to continuously monitor patients' health conditions and provide instant medical services over a distance. This network facilitates the transmission of real-time patient data, sensed by resource-constrained biosensors, to the end user through an open communication channel. Any modification or alteration of such sensed physiological data leads to a wrong diagnosis, which may put the patient's life in danger. Therefore, among the many challenges in WMSNs, security is the most essential requirement to address. Hence, to maintain the security and privacy of sensitive medical data, this article proposes a lightweight mutual authentication and key agreement (AKA) scheme using Physical Unclonable Function (PUF)-enabled sensor nodes. Moreover, to make the WMSN more secure and reliable, physiological data such as the patient's electrocardiogram (ECG) are also considered. To establish its accuracy and security, the scheme is validated through the Real or Random (RoR) model and further confirmed through simulation using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. A thorough examination encompassing security, performance, and a comparative assessment with existing related schemes illustrates that the proposed scheme not only exhibits superior resistance to well-known attacks but also maintains a cost-effective strategy at the sensor node, specifically a reduction of 35.71% in computational cost and 49.12% in communication cost.
Title: A Combined Approach of PUF and Physiological Data for Mutual Authentication and Key Agreement in WMSN
Authors: Shanvendra Rai, Rituparna Paul, Subhasish Banerjee, Preetisudha Meher, Gulab Sah
Pub Date: 2024-02-02 | DOI: 10.1007/s10723-023-09731-5
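The scheme's exact message flow is not reproduced here; the sketch below models the general PUF-based AKA pattern under stated assumptions: HMAC stands in for the hardware PUF, and hashes over fresh nonces provide mutual proof of the challenge-response pair plus a shared session key. All names and the enrollment layout are hypothetical.

```python
import hashlib
import hmac
import os

def puf(device_secret, challenge):
    """Software stand-in for a hardware PUF: deterministic per device,
    infeasible to predict without the device (modeled here with HMAC)."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

# --- enrollment (secure phase): gateway stores one challenge-response pair
device_secret   = os.urandom(32)   # models the chip's physical randomness
challenge       = os.urandom(16)
stored_response = puf(device_secret, challenge)

# --- mutual authentication and key agreement ---
n_gw, n_node = os.urandom(16), os.urandom(16)   # fresh nonces, one per side

# node proves knowledge of the PUF response without sending it in the clear
node_resp  = puf(device_secret, challenge)
node_proof = h(node_resp, n_gw, n_node)

# gateway verifies the node, then returns its own proof
gw_ok    = hmac.compare_digest(node_proof, h(stored_response, n_gw, n_node))
gw_proof = h(stored_response, n_node)
node_ok  = hmac.compare_digest(gw_proof, h(node_resp, n_node))

# both sides derive the same session key from the shared response and nonces
k_node = h(node_resp, n_gw, n_node, b"session")
k_gw   = h(stored_response, n_gw, n_node, b"session")
```

The paper additionally mixes ECG-derived physiological data into the exchange; in this sketch that would be one more input to `h(...)` on both sides.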
Pub Date: 2024-01-29 | DOI: 10.1007/s10723-023-09730-6
Xinyu Zhang, Zhigang Hu, Yang Liang, Hui Xiao, Aikun Xu, Meiguang Zheng, Chuan Sun
In the era of ubiquitous network devices, an exponential increase in content requests from user equipment (UE) calls for optimized caching strategies within a cloud-edge integrated architecture. This approach is critical to handling large numbers of requests. To enhance caching efficiency, federated deep reinforcement learning (FDRL) is widely used to adjust caching policies. Nonetheless, for improved adaptability in dynamic scenarios, FDRL generally demands extended online deep training, incurring a notable energy overhead compared with rule-based approaches. To balance caching efficiency against training energy expenditure, we integrate a content request latency model, a deep reinforcement learning model based on Markov decision processes (MDPs), and a two-stage training energy consumption model. Together, these components define a new average delay and training energy gain (ADTEG) challenge. To address this challenge, we put forth an innovative dynamic federated optimization strategy. This approach refines the pre-training phase through cluster-based strategies and parameter transfer methodologies. The online training phase is improved through a dynamic federated framework and an adaptive local iteration count. The experimental findings affirm that our proposed methodology reduces the training energy outlay while maintaining caching efficacy.
Title: A Federated Deep Reinforcement Learning-based Low-power Caching Strategy for Cloud-edge Collaboration
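A minimal sketch of the FedAvg-style aggregation with the adaptive local iteration count the abstract mentions; the scalar local objective and per-node data are toy stand-ins for the caching-policy networks, and the "twice the local data size" rule is an assumption for illustration.

```python
def local_train(w, data, steps, lr=0.1):
    """Toy local objective per edge node: SGD toward this node's values."""
    for i in range(steps):
        x = data[i % len(data)]
        w -= lr * 2.0 * (w - x)     # gradient step on (w - x)^2
    return w

def fed_avg(client_params, client_sizes):
    """FedAvg: aggregate local parameters weighted by local sample counts."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_params, client_sizes)) / total

# per-edge-node "data" (e.g. observed content-popularity values, illustrative)
clients = {"edge-a": [1.0, 1.2, 0.9], "edge-b": [3.0, 2.8], "edge-c": [2.0]}

global_w = 0.0
for _ in range(5):                       # federated rounds
    updates, sizes = [], []
    for data in clients.values():
        steps = max(1, 2 * len(data))    # adaptive local iteration count
        updates.append(local_train(global_w, data, steps))
        sizes.append(len(data))
    global_w = fed_avg(updates, sizes)
```

Nodes with more data do more local work per round, which is one simple way to trade communication rounds against local compute.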
Pub Date: 2024-01-27 | DOI: 10.1007/s10723-023-09738-y
Abstract
Pallet racking is a critical element of the production, storage, and distribution networks used by businesses worldwide. Ongoing inspections and maintenance are required to ensure the workforce's safety and the stock's protection. Currently, certified inspectors examine racks manually, which causes operational delays and service charges, and damage is missed because of human error. As businesses move toward smart manufacturing, we describe an automated racking assessment method utilizing an integrated MobileNetV2–You Only Look Once (YOLOv5) framework. The proposed method examines the automated pallet racking system and detects multiple kinds of damage on edge platforms. It employs YOLOv5 in conjunction with the Block Development Mechanism (BDM), which detects defective pallet racks. We propose a device that attaches to the moveable cage of the forklift truck and provides adequate coverage of the neighboring racks. We also classify any damage as significant or minor so that floor supervisors can decide immediately whether a replacement is necessary in each circumstance. Instead of annual or quarterly racking inspections, this would give the racking industry a way to continuously monitor the racking, creating a more secure workplace environment. Our suggested method generates a classifier tailored for installation on edge devices used by forklift operators.
Title: Automated Pallet Racking Examination in Edge Platform Based on MobileNetV2: Towards Smart Manufacturing
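The abstract's minor-vs-significant split could be post-processed from detector output roughly as follows; the detection records, thresholds, and labels are all hypothetical, not the paper's actual pipeline.

```python
# Hypothetical detector output: each detection carries a confidence score and
# the damaged region's area as a fraction of the rack upright in the frame.
DETECTIONS = [
    {"label": "dent",  "conf": 0.91, "area_frac": 0.002},
    {"label": "bend",  "conf": 0.84, "area_frac": 0.031},
    {"label": "crack", "conf": 0.47, "area_frac": 0.050},  # below confidence cut
]

def triage(dets, conf_min=0.5, major_area=0.02):
    """Keep confident detections and split them into minor vs. significant
    damage so a supervisor can decide whether replacement is needed now."""
    report = {"minor": [], "significant": []}
    for d in dets:
        if d["conf"] < conf_min:
            continue  # discard low-confidence detections
        bucket = "significant" if d["area_frac"] >= major_area else "minor"
        report[bucket].append(d["label"])
    return report

r = triage(DETECTIONS)
```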
Pub Date: 2024-01-26 | DOI: 10.1007/s10723-024-09740-y
K Johny Elma, Praveena Rachel Kamala S, Saraswathi T
The evolutionary growth of Wireless Sensor Networks (WSNs) supports a wide range of applications. To deploy a WSN over a large area for sensing the environment, the accurate location of each node is a prerequisite; owing to these traits, WSNs have been effectively deployed across many devices. Using various localization techniques, location information is obtained for unknown nodes. Recently, node localization has employed standard bio-inspired algorithms to sustain the fast convergence ability of WSN applications. Thus, this paper aims to develop a new hybrid optimization algorithm for solving the node localization problem among the unknown nodes in a WSN. This hybrid optimization scheme combines two efficient heuristic strategies, Black Widow Optimization (BWO) and the Honey Badger Algorithm (HBA), named Hybridized Black Widow-Honey Badger Optimization (HBW-HBO), to achieve the objective of the framework. The main objective of the developed heuristic-based node localization framework is to minimize the localization error between the actual and detected locations of all nodes in the WSN. To validate the developed scheme, it is compared with different existing optimization strategies using different measures. The experimental analysis demonstrates more robust and consistent node localization performance for the developed scheme than for the comparative algorithms.
Title: Hybridized Black Widow-Honey Badger Optimization: Swarm Intelligence Strategy for Node Localization Scheme in WSN
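A sketch of the objective such a hybrid swarm would minimize: the mean absolute residual between estimated and measured anchor distances. The greedy random search below is a simple stand-in for HBW-HBO, and the anchors, noise level, and step size are illustrative assumptions.

```python
import math
import random

random.seed(3)

ANCHORS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
TRUE    = (3.0, 7.0)   # ground-truth position of the unknown node
# noisy range measurements from each anchor (illustrative +/-0.1 noise)
MEAS = [math.dist(TRUE, a) + random.uniform(-0.1, 0.1) for a in ANCHORS]

def localization_error(pos):
    """Fitness the swarm minimizes: mean absolute residual between the
    candidate position's anchor distances and the measured ranges."""
    return sum(abs(math.dist(pos, a) - d)
               for a, d in zip(ANCHORS, MEAS)) / len(ANCHORS)

def greedy_search(iters=4000, step=0.5):
    """Stand-in for the hybrid swarm: accept any improving random move."""
    best = (random.uniform(0, 10), random.uniform(0, 10))
    for _ in range(iters):
        cand = (best[0] + random.uniform(-step, step),
                best[1] + random.uniform(-step, step))
        if localization_error(cand) < localization_error(best):
            best = cand
    return best

est = greedy_search()
err = math.dist(est, TRUE)   # distance between estimated and true position
```

HBW-HBO would replace `greedy_search` while keeping the same fitness function.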
Pub Date: 2024-01-25 | DOI: 10.1007/s10723-023-09729-z
Ziyang Zhang, Keyu Gu, Zijie Xu
This paper focuses on the problem of computation offloading in a high-mobility Internet of Vehicles (IoV) environment. The goal is to address the challenges related to latency, energy consumption, and payment cost requirements. The approach considers both moving and parked vehicles as fog nodes, which can assist in offloading computational tasks. However, as the number of vehicles increases, the action space for each agent grows exponentially, posing a challenge for decentralised decision-making. The dynamic nature of vehicular mobility further complicates the network dynamics, requiring joint cooperative behaviour from the learning agents to achieve convergence. The traditional deep reinforcement learning (DRL) approach for offloading in IoVs treats each agent as an independent learner, ignoring the actions of other agents during training. This paper utilises a cooperative three-layer decentralised architecture called Vehicle-Assisted Multi-Access Edge Computing (VMEC) to overcome this limitation. The VMEC network consists of three layers: the fog, cloudlet, and cloud layers. In the fog layer, vehicles within associated Roadside Units (RSUs) and neighbouring RSUs participate as fog nodes. The middle layer comprises Mobile Edge Computing (MEC) servers, while the top layer represents the cloud infrastructure. To address the dynamic task offloading problem in VMEC, the paper proposes a Decentralized Framework of Task and Computational Offloading (DFTCO), which utilises the strengths of MADRL and NOMA techniques.
This approach considers multiple agents making offloading decisions simultaneously and aims to find the optimal matching between tasks and available resources.
Title: DRL-based Task and Computational Offloading for Internet of Vehicles in Decentralized Computing
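The weighted latency/energy/payment trade-off an offloading agent might minimize across the three VMEC layers can be sketched as follows; the per-site parameters and the weights are invented for illustration, not taken from the paper.

```python
# Candidate execution sites for one task in the three-layer VMEC hierarchy.
SITES = {
    #            latency_ms  energy_mJ  payment
    "fog":      (15.0,       40.0,      0.8),
    "cloudlet": (35.0,       25.0,      0.5),
    "cloud":    (90.0,       10.0,      0.2),
}

def offload_cost(site, w_lat=0.5, w_eng=0.3, w_pay=0.2):
    """Weighted cost an agent would minimise; each term is normalised by
    its worst case across sites so the weights are comparable."""
    lat, eng, pay = SITES[site]
    return (w_lat * lat / 90.0) + (w_eng * eng / 40.0) + (w_pay * pay / 0.8)

best_site = min(SITES, key=offload_cost)
```

A DRL agent would learn this trade-off from rewards rather than evaluate a fixed formula; the weights here play the role its reward shaping would.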
Pub Date: 2024-01-23 | DOI: 10.1007/s10723-023-09732-4
Abstract
Autoscaling enables container cluster orchestrators to automatically adjust computational resources, such as containers and Virtual Machines (VMs), to handle fluctuating workloads effectively. This adaptation can involve modifying the amount of resources (horizontal scaling) or adjusting their computational capacity (vertical scaling). The motivation for our work stems from the limitations of previous autoscaling approaches, which are either partial (scaling containers or VMs, but not both) or excessively complex to be used in real systems. This complexity arises from their use of models with a large number of variables and the addressing of two simultaneous challenges: achieving the optimal deployment for a single scheduling window and managing the transition between successive scheduling windows. We propose an Integer Linear Programming (ILP) model to address the challenge of autoscaling containers and VMs jointly, both horizontally and vertically, to minimize deployment costs. This model is designed to be used with predictive autoscalers and be solved in a reasonable time, even for large clusters. To this end, improvements and reasonable simplifications with respect to previous models have been carried out to drastically reduce the size of the resource allocation problem. Furthermore, the proposed model provides an enhanced representation of system performance in comparison to previous approaches. A tool called Conlloovia has been developed to implement this model. To evaluate its performance, we have conducted a comprehensive assessment, comparing it with two heuristic allocators with different problem sizes. Our findings indicate that Conlloovia consistently demonstrates lower deployment costs in a significant number of cases. Conlloovia has also been evaluated with a real application, using synthetic and real workload traces, as well as different scheduling windows, with deployment costs approximately 20% lower than heuristic allocators.
Title: Joint Autoscaling of Containers and Virtual Machines for Cost Optimization in Container Clusters
DOI: 10.1007/s10723-023-09732-4
Journal of Grid Computing (IF 5.5) · Published 2024-01-23
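The joint allocation problem the abstract describes can be illustrated with a toy instance: choose a VM type, a number of VMs (horizontal VM scaling), a container size (vertical container scaling), and a container count (horizontal container scaling) that serve a given workload at minimum cost. The paper's actual ILP formulation is not reproduced here, so this sketch solves a deliberately tiny version by exhaustive search; the VM types, prices, and per-core throughput are illustrative assumptions, not the paper's data.

```python
# Toy instance of joint container/VM allocation: pick a VM type, a VM
# count, a container size, and a container count that serve the demand
# at minimum cost. Solved by exhaustive search over a tiny search space;
# a real system would hand the same constraints to an ILP solver.
# All figures below are illustrative assumptions.
from itertools import product

VM_TYPES = {            # name: (cores, price in cents/hour) -- hypothetical
    "small":  (2, 5),
    "medium": (4, 9),
    "large":  (8, 17),
}
RPS_PER_CORE = 100      # assumed throughput of one container core
DEMAND = 900            # requests/second to serve in this scheduling window

def best_allocation(demand=DEMAND):
    """Return (cost, vm_type, n_vms, container_cores, n_containers)."""
    best = None
    for name, (cores, price) in VM_TYPES.items():
        for n_vms, c_cores, n_cont in product(range(1, 9), (1, 2, 4), range(1, 33)):
            if c_cores > cores:                           # container must fit in one VM
                continue
            if n_cont * c_cores > n_vms * cores:          # total core capacity
                continue
            if n_cont * c_cores * RPS_PER_CORE < demand:  # workload constraint
                continue
            cost = n_vms * price
            if best is None or cost < best[0]:
                best = (cost, name, n_vms, c_cores, n_cont)
    return best

print(best_allocation())   # cheapest feasible deployment for DEMAND
```

Even in this toy form, the two scaling dimensions the abstract distinguishes are visible: `n_vms` and `n_cont` are the horizontal knobs, `c_cores` the vertical one; the exponential growth of the search space as these ranges widen is exactly why the paper reduces the problem size before solving the ILP.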
Pub Date: 2024-01-20 · DOI: 10.1007/s10723-023-09725-3
Xiedong Song, Qinmin Ma
Edge nodes, which are expected to grow into a multi-billion-dollar market, are essential for detecting a variety of cyber threats on Internet-of-Things endpoints. Adopting current network intrusion detection systems built on deep learning models (DLM) is constrained by the resource limitations of this network equipment layer. We address this issue by creating a unique, lightweight, fast, and accurate DLM-based edge detection model that identifies distributed denial-of-service (DDoS) attacks on edge nodes. Our approach can produce accurate results at a useful pace even with limited resources, such as low power, memory, and processing capabilities. The Federated Convolution Neural Network (FedACNN) deep learning method uses attention mechanisms to minimise communication delay. The developed model uses a recent cybersecurity dataset (UNSW 2015) deployed on an edge node simulated by a Raspberry Pi. Our findings show that, compared to traditional DLM methodologies, our model retains a high accuracy rate of about 99%, even with reduced CPU and memory use. It is also about three times smaller in volume than the most advanced model while requiring far less testing time.
Title: Intrusion Detection using Federated Attention Neural Network for Edge Enabled Internet of Things
Journal of Grid Computing (IF 5.5) · Published 2024-01-20
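The abstract names FedACNN but does not specify its aggregation rule. As a hedged illustration, the following sketches the server-side averaging step that federated schemes of this kind build on, with simple sample-count weighting standing in for the paper's attention-based weighting (an assumption, not the paper's method); model weights are plain lists of floats rather than CNN tensors.

```python
# Minimal federated-averaging (FedAvg-style) aggregation sketch: each
# edge node trains locally and reports its weight vector; the server
# combines them, weighted by local dataset size. The attention-based
# weighting of FedACNN is NOT implemented here -- sample counts are a
# stand-in assumption.
def fed_avg(client_weights, client_samples):
    """Average per-client weight vectors, weighted by local sample count."""
    total = sum(client_samples)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_samples)) / total
        for i in range(dim)
    ]

# Three edge nodes report local weights after a training round; the
# third node has twice as much local data, so it pulls the average.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
samples = [100, 100, 200]
print(fed_avg(clients, samples))  # -> [3.5, 4.5]
```

Only the weight vectors travel to the server, never the raw traffic data, which is what lets resource-constrained edge nodes participate in training while keeping communication per round bounded by model size.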