Smart Financial Investor’s Risk Prediction System Using Mobile Edge Computing
Pub Date: 2023-12-04 | DOI: 10.1007/s10723-023-09710-w
Caijun Cheng, Huazhen Huang
Economic and social growth has driven the financial system to a new stage of development. Public and corporate financial investment activity has risen significantly in this climate and now plays a substantial role in the efficient allocation of market capital. Because risk and reward coexist, the financial sector is exposed to high-risk events that can destabilize market order and cause tangible financial losses. Operational risk is a major barrier to an organization’s growth: a single lapse can rapidly erode a firm’s standing. Strengthening funding management and forecasting risk is therefore essential for companies to develop successfully, improve their competitiveness in the marketplace, and limit adverse effects. This study draws on mobile edge computing to build an intelligent system that forecasts the various risks arising throughout the financial investment process, based on the operational knowledge of major investment platforms. A knowledge-graph-based CNN-LSTM approach is then used to predict financial risk. Experimental results are examined in detail and show that the method can accurately estimate the risk associated with financial investments. Finally, a plan for further improving the financial risk prediction system is put forward.
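The abstract outlines a knowledge-graph-based CNN-LSTM predictor without architectural details. As an illustration only, the following PyTorch sketch shows how such a model could be wired; the layer sizes, the 30-day input window, and the use of knowledge-graph entity embeddings as extra input features are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMRiskModel(nn.Module):
    """Hypothetical CNN-LSTM risk classifier: a 1-D convolution extracts local
    patterns from a window of daily indicators (optionally concatenated with
    knowledge-graph entity embeddings), and an LSTM models their temporal order."""

    def __init__(self, n_features: int = 16, kg_dim: int = 8, hidden: int = 64, n_risk_levels: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features + kg_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_risk_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); Conv1d expects (batch, channels, time)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h_n, _) = self.lstm(z)
        return self.head(h_n[-1])               # logits over assumed risk levels

if __name__ == "__main__":
    model = CNNLSTMRiskModel()
    window = torch.randn(4, 30, 16 + 8)          # 4 samples, 30-day window (assumed)
    print(model(window).shape)                   # -> torch.Size([4, 3])
```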
{"title":"Smart Financial Investor’s Risk Prediction System Using Mobile Edge Computing","authors":"Caijun Cheng, Huazhen Huang","doi":"10.1007/s10723-023-09710-w","DOIUrl":"https://doi.org/10.1007/s10723-023-09710-w","url":null,"abstract":"<p>The financial system has reached its pinnacle because of economic and social growth, which has propelled the financial sector into another era. Public and corporate financial investment operations have significantly risen in this climate, and they now play a significant part in and impact the efficient use of market money. This finance sector will be affected by high-risk occurrences because of the cohabitation of dangers and passions, which will cause order to become unstable and definite financial losses. An organization’s operational risk is a significant barrier to its growth. A bit of negligence could cause the business’s standing to erode rapidly. Increasing funding management and forecasting risks is essential for the successful development of companies, enhancing their competitiveness in the marketplace and minimizing negative effects. As a result, this study takes the idea of mobile edge computing. It creates an intelligent system that can forecast different risks throughout the financial investment process based on the operational knowledge of important investment platforms. The CNN-LSTM approach, based on knowledge graphs, is then used to forecast financial risks. The results are then thoroughly examined through tests, demonstrating that the methodology can accurately estimate the risk associated with financial investments. Finally, a plan for improving the system for predicting financial risk is put out.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing
Pub Date: 2023-12-04 | DOI: 10.1007/s10723-023-09708-4
Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti
Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it a promising approach for advancing the Internet of Things (IoT). However, current approaches are limited by factors such as network latency, bandwidth, energy consumption, task characteristics, and edge-server overload. To address these limitations, this research proposes ITODDPG, a novel approach that integrates Deep Reinforcement Learning (DRL) with the Deep Deterministic Policy Gradient (DDPG) algorithm and a Markov Decision Problem (MDP) formulation for task offloading in MEC. First, ITODDPG formulates the task offloading problem in MEC as an MDP, which enables the agent to learn a policy that maximizes the expected cumulative reward. Second, ITODDPG employs a deep neural network to approximate the Q-function, which maps state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that ITODDPG outperforms the baseline algorithms in terms of average reward and convergence speed. Beyond this, the proposed approach can learn complex non-linear policies using a DNN and an information-theoretic objective function, further improving task-offloading performance in MEC. In experimental trials against three baseline methods, the approach delivers better performance, is highly scalable, handles large and complex environments, and is suitable for real-world deployment, making it applicable to a wide range of task offloading and MEC scenarios.
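The abstract positions ITODDPG as a DDPG-style agent over an MDP whose reward trades off offloading costs. The sketch below shows one plausible shape of such an agent's actor-critic pair, soft target update, and reward; the state/action dimensions, reward weights, and network widths are placeholders, and the full ITODDPG training loop (replay buffer, exploration noise) is omitted.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 10, 4        # assumed: channel/queue state -> offloading ratios

class Actor(nn.Module):
    """Maps an MEC state to a continuous offloading action in [0, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Sigmoid())
    def forward(self, s): return self.net(s)

class Critic(nn.Module):
    """Approximates Q(s, a), the expected cumulative reward of an offloading decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, s, a): return self.net(torch.cat([s, a], dim=-1))

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005) -> None:
    """DDPG-style Polyak averaging of the target network parameters."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

def reward(latency: float, energy: float, w_lat: float = 0.5, w_en: float = 0.5) -> float:
    """Assumed reward: negative weighted cost of latency and energy."""
    return -(w_lat * latency + w_en * energy)
```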
{"title":"Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing","authors":"Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti","doi":"10.1007/s10723-023-09708-4","DOIUrl":"https://doi.org/10.1007/s10723-023-09708-4","url":null,"abstract":"<p>Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it an up-and-coming method for advancing the Internet of Things (IoT). However, current approaches are limited by various factors such as network latency, bandwidth, energy consumption, task characteristics, and edge server overload. To address these limitations, this research propose a novel approach that integrates Deep Reinforcement Learning (DRL) with Deep Deterministic Policy Gradient (DDPG) and Markov Decision Problem for task offloading in MEC. Among DRL algorithms, the ITODDPG algorithm based on the DDPG algorithm and MDP is a popular choice for task offloading in MEC. Firstly, the ITODDPG algorithm formulates the task offloading problem in MEC as an MDP, which enables the agent to learn a policy that maximizes the expected cumulative reward. Secondly, ITODDPG employs a deep neural network to approximate the Q-function, which maps the state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that the ITODDPG algorithm outperforms the baseline algorithms regarding average compensation and convergence speed. In addition to its superior performance, our proposed approach can learn complex non-linear policies using DNN and an information-theoretic objective function to improve the performance of task offloading in MEC. Compared to traditional methods, our approach delivers improved performance, making it highly effective for developing IoT environments. Experimental trials were carried out, and the results indicate that the suggested approach can enhance performance compared to the other three baseline methods. It is highly scalable, capable of handling large and complex environments, and suitable for deployment in real-world scenarios, ensuring its widespread applicability to a diverse range of task offloading and MEC applications.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138537018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Prediction of Makespan Matrix Workflow Scheduling Algorithm for Heterogeneous Cloud Environments
Pub Date: 2023-11-28 | DOI: 10.1007/s10723-023-09711-9
Longxin Zhang, Minghui Ai, Runti Tan, Junfeng Man, Xiaojun Deng, Keqin Li
Leveraging a cloud computing environment to execute workflow applications offers high flexibility and strong scalability, thereby significantly improving resource utilization. Current research focuses heavily on reducing the scheduling length (makespan) of parallel task sets and improving the efficiency of large workflow applications in cloud computing environments. Effectively managing task dependencies and execution order is crucial to designing efficient workflow scheduling algorithms. This study puts forward a high-efficiency workflow scheduling algorithm based on a predicted makespan matrix (PMMS) for heterogeneous cloud computing environments. First, PMMS calculates the priority of each task from the predicted makespan (PM) matrix and obtains the task scheduling list. Second, the optimistic scheduling length (OSL) value of each task is calculated from the PM matrix and the earliest finish time. Third, the best virtual machine is selected for each task according to the minimum OSL value. Extensive experiments show that, compared with the state-of-the-art HEFT, PEFT, and PPTS algorithms, PMMS reduces workflow scheduling length by 6.84%–15.17%, 5.47%–11.39%, and 4.74%–17.27%, respectively, while preserving priority constraints and without increasing time complexity.
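The abstract summarizes PMMS as three steps: rank tasks from a predicted makespan (PM) matrix, compute an optimistic scheduling length (OSL) per task from the PM matrix and the earliest finish time, and map each task to the VM with the minimum OSL. A simplified list-scheduling sketch of that flow follows; the exact PM and OSL formulas are not given in the abstract, so the HEFT-like rank and the earliest-finish-time-plus-look-ahead OSL below are assumptions.

```python
# Hypothetical DAG: task -> list of successors; costs[task][vm] = execution time on that VM.
dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
costs = {"A": [4, 6], "B": [3, 2], "C": [5, 4], "D": [2, 3]}
n_vms = 2

def pm_rank(task):
    """Assumed priority: average cost of the task plus the largest rank among its successors."""
    own = sum(costs[task]) / n_vms
    return own + max((pm_rank(s) for s in dag[task]), default=0.0)

schedule, vm_ready, finish = {}, [0.0] * n_vms, {}
for task in sorted(dag, key=pm_rank, reverse=True):              # step 1: priority list
    preds = [p for p in dag if task in dag[p]]
    look_ahead = max((sum(costs[s]) / n_vms for s in dag[task]), default=0.0)
    best_vm, best_eft, best_osl = None, None, float("inf")
    for vm in range(n_vms):
        est = max([vm_ready[vm]] + [finish[p] for p in preds])   # earliest start time
        eft = est + costs[task][vm]                               # earliest finish time
        osl = eft + look_ahead                                    # step 2: optimistic scheduling length
        if osl < best_osl:
            best_vm, best_eft, best_osl = vm, eft, osl
    schedule[task], finish[task] = best_vm, best_eft              # step 3: VM with minimum OSL
    vm_ready[best_vm] = best_eft

print(schedule, max(finish.values()))
```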
{"title":"Efficient Prediction of Makespan Matrix Workflow Scheduling Algorithm for Heterogeneous Cloud Environments","authors":"Longxin Zhang, Minghui Ai, Runti Tan, Junfeng Man, Xiaojun Deng, Keqin Li","doi":"10.1007/s10723-023-09711-9","DOIUrl":"https://doi.org/10.1007/s10723-023-09711-9","url":null,"abstract":"<p>Leveraging a cloud computing environment for executing workflow applications offers high flexibility and strong scalability, thereby significantly improving resource utilization. Current scholarly discussions heavily focus on effectively reducing the scheduling length (makespan) of parallel task sets and improving the efficiency of large workflow applications in cloud computing environments. Effectively managing task dependencies and execution sequences plays a crucial role in designing efficient workflow scheduling algorithms. This study forwards a high-efficiency workflow scheduling algorithm based on predict makespan matrix (PMMS) for heterogeneous cloud computing environments. First, PMMS calculates the priority of each task based on the predict makespan (PM) matrix and obtains the task scheduling list. Second, the optimistic scheduling length (OSL) value of each task is calculated based on the PM matrix and the earliest finish time. Third, the best virtual machine is selected for each task according to the minimum OSL value. A large number of substantial experiments show that the scheduling length of workflow for PMMS, compared with state-of-the-art HEFT, PEFT, and PPTS algorithms, is reduced by 6.84%–15.17%, 5.47%–11.39%, and 4.74%–17.27%, respectively. This hinges on the premise of ensuring priority constraints and not increasing the time complexity.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sustainable Environmental Design Using Green IOT with Hybrid Deep Learning and Building Algorithm for Smart City
Pub Date: 2023-11-27 | DOI: 10.1007/s10723-023-09704-8
Yuting Zhong, Zesheng Qin, Abdulmajeed Alqhatani, Ahmed Sayed M. Metwally, Ashit Kumar Dutta, Joel J. P. C. Rodrigues
Smart cities and urbanization rely on enormous numbers of IoT devices to transfer data for analysis and information processing. These IoT networks can connect billions of devices and transfer essential data from their surroundings, and the tremendous data exchange among billions of gadgets creates a massive demand for energy. Green IoT aims to make the environment a better place while lowering the power usage of IoT devices. In this work, a hybrid deep learning method called Green Energy-Efficient Routing (GEER) with a long short-term memory deep Q-network (LSTM DQN) is used to minimize the energy consumption of devices. Initially, GEER with Ant Colony Optimization (ACO) and an AutoEncoder (AE) provides efficient routing between devices in the network. Next, the LSTM-DQN-based Reinforcement Learning (RL) method reduces the energy consumption of IoT devices. This hybrid approach leverages the strengths of each technique to address different aspects of energy-efficient routing: ACO and AE contribute to efficient routing decisions, while LSTM DQN optimizes energy consumption, resulting in a well-rounded solution. Finally, the proposed GELSDQN-ACO method is compared with previous methods such as RNN-LSTM, DPC-DBN, and LSTM-DQN. Moreover, we critically analyze green IoT and carry out the implementation and evaluation.
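The abstract states that GEER uses Ant Colony Optimization (together with an autoencoder) for routing decisions without giving the transition rule. A minimal sketch of the classic ACO next-hop selection such a router could use is shown below; the pheromone/heuristic weights (alpha, beta) and the energy-aware heuristic (residual energy over link cost) are illustrative assumptions.

```python
import random

def choose_next_hop(current, neighbors, pheromone, link_cost, residual_energy,
                    alpha: float = 1.0, beta: float = 2.0):
    """Classic ACO transition rule with an (assumed) energy-aware heuristic:
    eta(i, j) = residual_energy[j] / link_cost[(i, j)], so low-cost links toward
    well-charged nodes are preferred."""
    weights = []
    for j in neighbors:
        eta = residual_energy[j] / link_cost[(current, j)]
        weights.append((pheromone[(current, j)] ** alpha) * (eta ** beta))
    return random.choices(neighbors, weights=weights, k=1)[0]

# Tiny usage example with made-up values.
neighbors = ["B", "C"]
pheromone = {("A", "B"): 0.6, ("A", "C"): 0.4}
link_cost = {("A", "B"): 2.0, ("A", "C"): 1.0}
residual_energy = {"B": 0.9, "C": 0.5}
print(choose_next_hop("A", neighbors, pheromone, link_cost, residual_energy))
```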
{"title":"Sustainable Environmental Design Using Green IOT with Hybrid Deep Learning and Building Algorithm for Smart City","authors":"Yuting Zhong, Zesheng Qin, Abdulmajeed Alqhatani, Ahmed Sayed M. Metwally, Ashit Kumar Dutta, Joel J. P. C. Rodrigues","doi":"10.1007/s10723-023-09704-8","DOIUrl":"https://doi.org/10.1007/s10723-023-09704-8","url":null,"abstract":"<p>Smart cities and urbanization use enormous IoT devices to transfer data for analysis and information processing. These IoT can relate to billions of devices and transfer essential data from their surroundings. There is a massive need for energy because of the tremendous data exchange between billions of gadgets. Green IoT aims to make the environment a better place while lowering the power usage of IoT devices. In this work, a hybrid deep learning method called \"Green energy-efficient routing (GEER) with long short-term memory deep Q-Network is used to minimize the energy consumption of devices. Initially, a GEER with Ant Colony Optimization (ACO) and AutoEncoder (AE) provides efficient routing between devices in the network. Next, the long short-term memory deep Q-Network based Reinforcement Learning (RL) method reduces the energy consumption of IoT devices. This hybrid approach leverages the strengths of each technique to address different aspects of energy-efficient routing. ACO and AE contribute to efficient routing decisions, while LSTM DQN optimizes energy consumption, resulting in a well-rounded solution. Finally, the proposed GELSDQN-ACO method is compared with previous methods such as RNN-LSTM, DPC-DBN, and LSTM-DQN. Moreover, we critically analyze the green IoT and perform implementation and evaluation.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138537030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Auto-Scaling Approach for Microservices in Cloud Computing Environments
Pub Date: 2023-11-27 | DOI: 10.1007/s10723-023-09713-7
Matineh ZargarAzad, Mehrdad Ashtiani
Recently, microservices have become a commonly used architectural pattern for building cloud-native applications. Cloud computing provides flexibility for service providers, allowing them to add or remove resources depending on the workload of their web applications. If the resources allocated to a service are not aligned with its requirements, failures or delayed responses increase, resulting in customer dissatisfaction. This problem is a significant challenge in microservices-based applications, because the thousands of microservices in a system may interact in complex ways. Auto-scaling is a cloud computing feature that enables resource scalability on demand, allowing service providers to provision resources for their applications without human intervention under dynamic workloads, minimizing resource cost and latency while maintaining quality-of-service requirements. In this research, we aimed to establish a computational model for analyzing the workload of all microservices. To this end, the overall workload entering the system was considered, and the relationships and function calls between microservices were taken into account, because in a large-scale application with thousands of microservices it is usually difficult to monitor every microservice accurately and gather precise performance metrics. We then developed a multi-criteria decision-making method to select candidate microservices for scaling. We tested the proposed approach on three datasets. The experiments show that the input load toward microservices is detected with an average accuracy of about 99%, a notable result. Furthermore, the proposed approach substantially improves resource utilization, achieving average improvements of 40.74%, 20.28%, and 28.85% across the three datasets compared with existing methods. It does so while markedly reducing the number of scaling operations, by 54.40%, 55.52%, and 69.82%, respectively. This optimization translates into fewer required resources, leading to cost reductions of 1.64%, 1.89%, and 1.67%, respectively.
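The abstract describes estimating each microservice's workload from the overall input load and the call relationships between services, then applying a multi-criteria decision method to pick scaling candidates. A compact sketch of one way to realize this is given below; the call-ratio matrix, the criteria (estimated load, CPU, latency), and the weighted-sum scoring are assumptions standing in for the paper's unspecified model.

```python
# Hypothetical call graph: calls[a][b] = average calls to b per request handled by a.
calls = {
    "gateway": {"orders": 1.0, "users": 0.5},
    "orders": {"inventory": 2.0},
    "users": {},
    "inventory": {},
}

def propagate_load(entry_rps: float, entry: str = "gateway") -> dict:
    """Estimate per-microservice request rate from the external load and the call ratios."""
    load = {svc: 0.0 for svc in calls}
    load[entry] = entry_rps
    # Simple repeated relaxation; adequate for the small acyclic example above.
    for _ in range(len(calls)):
        for src, fanout in calls.items():
            for dst in fanout:
                load[dst] = sum(load[s] * calls[s].get(dst, 0.0) for s in calls)
    return load

def scaling_candidates(load, cpu, latency, weights=(0.5, 0.3, 0.2), top_k: int = 2):
    """Weighted-sum multi-criteria score (assumed criteria); higher score -> scale out first."""
    max_load = max(load.values()) or 1.0
    def score(svc):
        return (weights[0] * load[svc] / max_load
                + weights[1] * cpu[svc]
                + weights[2] * latency[svc])
    return sorted(calls, key=score, reverse=True)[:top_k]

load = propagate_load(entry_rps=100.0)
cpu = {"gateway": 0.4, "orders": 0.8, "users": 0.3, "inventory": 0.9}           # utilization
latency = {"gateway": 0.05, "orders": 0.20, "users": 0.03, "inventory": 0.35}   # seconds
print(load, scaling_candidates(load, cpu, latency))
```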
{"title":"An Auto-Scaling Approach for Microservices in Cloud Computing Environments","authors":"Matineh ZargarAzad, Mehrdad Ashtiani","doi":"10.1007/s10723-023-09713-7","DOIUrl":"https://doi.org/10.1007/s10723-023-09713-7","url":null,"abstract":"<p>Recently, microservices have become a commonly-used architectural pattern for building cloud-native applications. Cloud computing provides flexibility for service providers, allowing them to remove or add resources depending on the workload of their web applications. If the resources allocated to the service are not aligned with its requirements, instances of failure or delayed response will increase, resulting in customer dissatisfaction. This problem has become a significant challenge in microservices-based applications, because thousands of microservices in the system may have complex interactions. Auto-scaling is a feature of cloud computing that enables resource scalability on demand, thus allowing service providers to deliver resources to their applications without human intervention under a dynamic workload to minimize resource cost and latency while maintaining the quality of service requirements. In this research, we aimed to establish a computational model for analyzing the workload of all microservices. To this end, the overall workload entering the system was considered, and the relationships and function calls between microservices were taken into account, because in a large-scale application with thousands of microservices, accurately monitoring all microservices and gathering precise performance metrics are usually difficult. Then, we developed a multi-criteria decision-making method to select the candidate microservices for scaling. We have tested the proposed approach with three datasets. The results of the conducted experiments show that the detection of input load toward microservices is performed with an average accuracy of about 99% which is a notable result. Furthermore, the proposed approach has demonstrated a substantial enhancement in resource utilization, achieving an average improvement of 40.74%, 20.28%, and 28.85% across three distinct datasets in comparison to existing methods. This is achieved by a notable reduction in the number of scaling operations, reducing the count by 54.40%, 55.52%, and 69.82%, respectively. Consequently, this optimization translates into a decrease in required resources, leading to cost reductions of 1.64%, 1.89%, and 1.67% respectively.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI and Blockchain Assisted Framework for Offloading and Resource Allocation in Fog Computing
Pub Date: 2023-11-27 | DOI: 10.1007/s10723-023-09694-7
Mohammad Aknan, Maheshwari Prasad Singh, Rajeev Arya
The role of Internet of Things (IoT) applications has grown tremendously in areas such as healthcare, agriculture, academia, industry, transportation, and smart cities, improving everyday life. The number of IoT devices is increasing exponentially, generating volumes of data that IoT nodes cannot handle on their own. A centralized cloud architecture can process this enormous amount of IoT data but fails to offer adequate quality of service (QoS) due to high transmission latency, network congestion, and limited bandwidth. The fog paradigm has evolved to bring computing resources to the network edge and serve latency-sensitive IoT applications. Still, offloading decisions, heterogeneous fog networks, diverse workloads, security issues, energy consumption, and expected QoS remain significant challenges in this area. Hence, we propose a blockchain-enabled intelligent framework to tackle these issues and allocate optimal resources to incoming IoT requests in a collaborative cloud-fog environment. The proposed framework integrates an Artificial Intelligence (AI) based meta-heuristic algorithm that has a high convergence rate and can take offloading decisions at run time, leading to better-quality results. Blockchain technology secures IoT applications and their data against modern attacks. The experimental results show that, under similar experimental conditions, the proposed framework improves execution time and cost by up to 20% and energy consumption by up to 18% over other meta-heuristic approaches.
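The framework's meta-heuristic searches for placements that jointly reduce execution time, cost, and energy; the abstract does not give the objective function, so the weighted fitness below is only an assumed form showing how a candidate offloading plan (task to fog node or cloud) could be scored inside such an algorithm.

```python
# Hypothetical node characteristics in a collaborative cloud-fog setup.
nodes = {
    "fog-1": {"mips": 2000, "cost_per_s": 0.002, "watts": 8,  "rtt_s": 0.01},
    "fog-2": {"mips": 3000, "cost_per_s": 0.003, "watts": 10, "rtt_s": 0.02},
    "cloud": {"mips": 9000, "cost_per_s": 0.010, "watts": 40, "rtt_s": 0.12},
}
tasks = {"t1": 4000, "t2": 12000, "t3": 2500}      # workload in million instructions

def fitness(placement, w_time=0.4, w_cost=0.3, w_energy=0.3):
    """Assumed weighted objective to be minimized by the meta-heuristic."""
    time = cost = energy = 0.0
    for task, node in placement.items():
        spec = nodes[node]
        exec_s = tasks[task] / spec["mips"] + spec["rtt_s"]   # compute + transfer delay
        time += exec_s
        cost += exec_s * spec["cost_per_s"]
        energy += exec_s * spec["watts"]
    return w_time * time + w_cost * cost + w_energy * energy

print(fitness({"t1": "fog-1", "t2": "cloud", "t3": "fog-2"}))
```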
{"title":"AI and Blockchain Assisted Framework for Offloading and Resource Allocation in Fog Computing","authors":"Mohammad Aknan, Maheshwari Prasad Singh, Rajeev Arya","doi":"10.1007/s10723-023-09694-7","DOIUrl":"https://doi.org/10.1007/s10723-023-09694-7","url":null,"abstract":"<p>The role of Internet of Things (IoT) applications has increased tremendously in several areas like healthcare, agriculture, academia, industries, transportation, smart cities, etc. to make human life better. The number of IoT devices is increasing exponentially, and generating huge amounts of data that IoT nodes cannot handle. The centralized cloud architecture can process this enormous IoT data but fails to offer quality of service (QoS) due to high transmission latency, network congestion, and bandwidth. The fog paradigm has evolved that bring computing resources at the network edge for offering services to latency-sensitive IoT applications. Still, offloading decision, heterogeneous fog network, diverse workload, security issues, energy consumption, and expected QoS is significant challenges in this area. Hence, we have proposed a Blockchain-enabled Intelligent framework to tackle the mentioned issues and allocate the optimal resources for upcoming IoT requests in a collaborative cloud fog environment. The proposed framework is integrated with an Artificial Intelligence (AI) based meta-heuristic algorithm that has a high convergence rate, and the capability to take the offloading decision at run time, leading to improved results quality. Blockchain technology secures IoT applications and their data from modern attacks. The experimental results of the proposed framework exhibit significant improvement by up to 20% in execution time and cost and up to 18% in energy consumption over other meta-heuristic approaches under similar experimental environments.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secured SDN Based Task Scheduling in Edge Computing for Smart City Health Monitoring Operation Management System
Pub Date: 2023-11-22 | DOI: 10.1007/s10723-023-09707-5
Shuangshuang Zhang, Yue Tang, Dinghui Wang, Noorliza Karia, Chenguang Wang
Health monitoring systems (HMS) with wearable IoT devices are constantly being developed and improved, but most of these devices have limited energy and processing power due to resource constraints. Mobile edge computing (MEC) must be used to analyze HMS data in order to decrease bandwidth usage and improve reaction times for latency-dependent, computation-intensive applications. To meet these needs while accounting for emergencies in an HMS, this work offers an effective task planning and resource allocation mechanism in MEC. Using a Software-Defined Network (SDN) framework, we provide a priority-aware semi-greedy with genetic algorithm (PSG-GA) method. It assigns tasks different priorities according to their emergency levels, computed from the data collected by a patient’s smart wearable devices, and decides whether a job should be completed locally at the hospital workstations (HW) or in the cloud. The goal is to minimize both the bandwidth cost and the overall task processing time. Existing techniques were compared with the proposed SD-PSGA in terms of average latency, job scheduling effectiveness, execution duration, bandwidth consumption, CPU utilization, and power usage. The testing results are encouraging: SD-PSGA can handle emergencies and fulfill latency-sensitive task requirements at a lower bandwidth cost, and the testing model achieves an accuracy of 97–98% for nearly 200 tasks.
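PSG-GA is described as prioritizing tasks by the emergency level inferred from wearable readings and then deciding, semi-greedily, whether each job runs on the hospital workstations or in the cloud. The snippet below illustrates only the priority scoring and the restricted-candidate-list (semi-greedy) placement step under assumed vital-sign thresholds and cost figures; the genetic-algorithm refinement stage is not shown.

```python
import random

def emergency_priority(vitals: dict) -> float:
    """Assumed priority: deviation of heart rate and SpO2 from normal ranges."""
    hr_dev = max(0, vitals["heart_rate"] - 100) + max(0, 60 - vitals["heart_rate"])
    spo2_dev = max(0, 95 - vitals["spo2"])
    return hr_dev + 5 * spo2_dev

def semi_greedy_place(tasks, hw_cost, cloud_cost, alpha: float = 0.3):
    """Semi-greedy step: for each task (highest priority first) build a restricted
    candidate list (RCL) of placements within alpha of the cheapest and pick one at random."""
    plan = {}
    for name, vitals in sorted(tasks.items(), key=lambda kv: emergency_priority(kv[1]), reverse=True):
        options = {"hospital_workstation": hw_cost[name], "cloud": cloud_cost[name]}
        best = min(options.values())
        rcl = [place for place, c in options.items() if c <= best * (1 + alpha)]
        plan[name] = random.choice(rcl)
    return plan

tasks = {"ecg_analysis": {"heart_rate": 128, "spo2": 91},
         "daily_report": {"heart_rate": 72, "spo2": 98}}
hw_cost = {"ecg_analysis": 0.8, "daily_report": 2.0}       # assumed latency/bandwidth cost
cloud_cost = {"ecg_analysis": 1.5, "daily_report": 1.1}
print(semi_greedy_place(tasks, hw_cost, cloud_cost))
```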
{"title":"Secured SDN Based Task Scheduling in Edge Computing for Smart City Health Monitoring Operation Management System","authors":"Shuangshuang Zhang, Yue Tang, Dinghui Wang, Noorliza Karia, Chenguang Wang","doi":"10.1007/s10723-023-09707-5","DOIUrl":"https://doi.org/10.1007/s10723-023-09707-5","url":null,"abstract":"<p>Health monitoring systems (HMS) with wearable IoT devices are constantly being developed and improved. But most of these gadgets have limited energy and processing power due to resource constraints. Mobile edge computing (MEC) must be used to analyze the HMS information to decrease bandwidth usage and increase reaction times for applications that depend on latency and require intense computation. To achieve these needs while considering emergencies under HMS, this work offers an effective task planning and allocation of resources mechanism in MEC. Utilizing the Software Denied Network (SDN) framework; we provide a priority-aware semi-greedy with genetic algorithm (PSG-GA) method. It prioritizes tasks differently by considering their emergencies, calculated concerning the data collected from a patient’s smart wearable devices. The process can determine whether a job must be completed domestically at the hospital workstations (HW) or in the cloud. The goal is to minimize both the bandwidth cost and the overall task processing time. Existing techniques were compared to the proposed SD-PSGA regarding average latency, job scheduling effectiveness, execution duration, bandwidth consumption, CPU utilization, and power usage. The testing results are encouraging since SD-PSGA can handle emergencies and fulfill the task’s latency-sensitive requirements at a lower bandwidth cost. The accuracy of testing model achieves 97 to 98% for nearly 200 tasks.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) Based Computation Offloading in MEC for IoT
Pub Date: 2023-11-21 | DOI: 10.1007/s10723-023-09705-7
Jizhou Li, Qi Wang, Shuai Hu, Ling Li
The adoption of User Equipment (UE) is on the rise, driven by advancements in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), the Internet of Things (IoT), and Artificial Intelligence (AI). Among these, MEC stands out as a pivotal component of the 5G network. A critical challenge within MEC is task offloading, which involves balancing conflicting factors such as execution time, energy usage, and computation duration. Offloading interdependent tasks poses a further significant hurdle that requires attention. Existing models are single-objective, do not address task dependencies, and are computationally expensive. Therefore, the Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) algorithm is proposed to offload dependent tasks to the MEC with three objectives: minimizing execution delay, energy consumption, and the cost of MEC resources. Standard whale optimization is combined with Differential Evolution (DE), customized mutation operations, and an immune mechanism to enhance its search strategy. The proposed HIWDEO reduces the energy consumption and overhead incurred by the UE when executing its tasks. Comparison of the developed model with other optimization approaches shows the superiority of HIWDEO.
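The abstract says HIWDEO embeds DE-style mutation (and an immune mechanism) into the whale optimization search. The sketch below shows one iteration that combines the standard WOA encircling/spiral moves with a DE/rand/1 mutation on part of the population; the specific immune operator and the three-objective aggregation are not detailed in the abstract, so a simple greedy survivor selection and a single scalar cost stand in for them.

```python
import math
import random

def woa_de_step(population, fitness, t, t_max, F=0.5):
    """One assumed HIWDEO-style iteration: whale encircling/spiral moves plus a
    DE/rand/1 mutation applied to half of the candidate offloading vectors."""
    best = min(population, key=fitness)
    a = 2 - 2 * t / t_max                           # WOA control parameter shrinks to 0
    new_pop = []
    for i, x in enumerate(population):
        if i % 2 == 0:                              # WOA move
            r = random.random()
            A, C = 2 * a * r - a, 2 * random.random()
            if abs(A) < 1:                          # encircle the best solution
                cand = [b - A * abs(C * b - xi) for b, xi in zip(best, x)]
            else:                                   # spiral update around the best
                l = random.uniform(-1, 1)
                cand = [abs(b - xi) * math.exp(l) * math.cos(2 * math.pi * l) + b
                        for b, xi in zip(best, x)]
        else:                                       # DE/rand/1 mutation
            r1, r2, r3 = random.sample(population, 3)
            cand = [a1 + F * (a2 - a3) for a1, a2, a3 in zip(r1, r2, r3)]
        # Greedy survivor selection (simplified stand-in for the immune screening).
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop

# Toy usage: minimize a sphere function standing in for the offloading cost.
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(10)]
cost = lambda v: sum(c * c for c in v)
for t in range(50):
    pop = woa_de_step(pop, cost, t, 50)
print(min(cost(v) for v in pop))
```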
{"title":"Hybrid Immune Whale Differential Evolution Optimization (HIWDEO) Based Computation Offloading in MEC for IoT","authors":"Jizhou Li, Qi Wang, Shuai Hu, Ling Li","doi":"10.1007/s10723-023-09705-7","DOIUrl":"https://doi.org/10.1007/s10723-023-09705-7","url":null,"abstract":"<p>The adoption of User Equipment (UE) is on the rise, driven by advancements in Mobile Cloud Computing (MCC), Mobile Edge Computing (MEC), the Internet of Things (IoT), and Artificial Intelligence (AI). Among these, MEC stands out as a pivotal aspect of the 5G network. A critical challenge within the realm of MEC is task offloading. This involves optimizing conflicting factors like execution time, energy usage, and computation duration. Additionally, addressing the offloading of interdependent tasks poses another significant hurdle that requires attention. The developed models are single objective, task dependency, and computationally expensive. As a result, the Immune whale differential evolution optimization algorithm is proposed to offload the dependent tasks to the MEC with three objectives: minimizing the execution delay and reducing the energy and cost of MEC resources. The standard Whale optimization is incorporated with DE with customized mutation operations and immune system to enhance the searching strategy of Whale optimization. The proposed HIWDEO secured reduced energy and overhead of UE to execute its tasks. The comparison between the developed model and other optimization approaches shows the superiority of HIWDEO.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138537033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preservation of Sensitive Data Using Multi-Level Blockchain-based Secured Framework for Edge Network Devices
Pub Date: 2023-11-17 | DOI: 10.1007/s10723-023-09699-2
Charu Awasthi, Prashant Kumar Mishra, Pawan Kumar Pal, Surbhi Bhatia Khan, Ambuj Kumar Agarwal, Thippa Reddy Gadekallu, Areej A. Malibari
The proliferation of IoT devices has affected end users in several ways. Yottabytes (YB) of information are being produced in IoT environments because of the ever-increasing utilization of the Internet. Since sensitive information and privacy problems remain unsolved even with best-in-class information governance standards, it is difficult to bolster defensive security capabilities. Secure data sharing across disparate systems is made possible by blockchain technology, which operates on a decentralized computing paradigm. In ever-changing IoT environments, blockchain technology provides immutability across a wide range of services and use cases, so it can be leveraged to securely hold private information even in highly dynamic IoT contexts. However, as the rate of change in IoT networks accelerates, every potential weak point in the system is exposed, making it more challenging to keep sensitive data secure. In this study, we adopt a Multi-level Blockchain-based Secured Framework (M-BSF) to provide multi-level protection for sensitive data in the face of threats to IoT-based networking systems. The envisioned M-BSF framework incorporates edge-level, fog-level, and cloud-level security. At the edge and fog levels, baby Kyber and scaling Kyber cryptosystems are applied to ensure data preservation; Kyber is a cryptosystem that adopts public-key encryption and private-key decryption. Each block of the blockchain uses the cloud-based Argon2id hashing method for cloud-level data storage, providing the highest level of confidentiality. Argon2id is a stable hashing algorithm that uses a hybrid approach to memory access, combining data-dependent and data-independent memory features. Based on the attack-resistance rate (> 96%), computational cost (in time), and other key metrics, the proposed M-BSF security architecture appears to be an acceptable alternative to current methodologies.
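The cloud layer of M-BSF chains blocks with an Argon2id-style hash. The fragment below sketches only the block-linking idea; it uses Python's hashlib.sha256 as a stand-in, since the exact Argon2id parameters (and the Kyber key handling around them) are not given in the abstract, and an Argon2id implementation such as the argon2-cffi package could be substituted for the hash step.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Stand-in for the Argon2id digest described in the abstract: hash the
    canonical JSON form of the block header and payload."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: dict) -> list:
    """Link a new block of (per the abstract, already Kyber-encrypted) sensor
    records to the previous block via its hash."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
        "record": record,
    }
    block["hash"] = block_hash(block)
    return chain + [block]

chain = append_block([], {"device": "edge-42", "ciphertext": "<kyber-ct>"})
chain = append_block(chain, {"device": "edge-17", "ciphertext": "<kyber-ct>"})
# Verify that each block points at the hash of its predecessor.
print(all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain))))
```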
{"title":"Preservation of Sensitive Data Using Multi-Level Blockchain-based Secured Framework for Edge Network Devices","authors":"Charu Awasthi, Prashant Kumar Mishra, Pawan Kumar Pal, Surbhi Bhatia Khan, Ambuj Kumar Agarwal, Thippa Reddy Gadekallu, Areej A. Malibari","doi":"10.1007/s10723-023-09699-2","DOIUrl":"https://doi.org/10.1007/s10723-023-09699-2","url":null,"abstract":"<p>The proliferation of IoT devices has influenced end users in several aspects. Yottabytes (YB) of information are being produced in the IoT environs because of the ever-increasing utilization capacity of the Internet. Since sensitive information, as well as privacy problems, always seem to be an unsolved problem, even with best-in-class in-formation governance standards, it is difficult to bolster defensive security capabilities. Secure data sharing across disparate systems is made possible by blockchain technology, which operates on a decentralized computing paradigm. In the ever-changing IoT environments, blockchain technology provides irreversibility (immutability) usage across a wide range of services and use cases. Therefore, blockchain technology can be leveraged to securely hold private information, even in the dynamicity context of the IoT. However, as the rate of change in IoT networks accelerates, every potential weak point in the system is exposed, making it more challenging to keep sensitive data se-cure. In this study, we adopted a Multi-level Blockchain-based Secured Framework (M-BSF) to provide multi-level protection for sensitive data in the face of threats to IoT-based networking systems. The envisioned M-BSF framework incorporates edge-level, fog-level, and cloud-level security. At edge- and fog-level security, baby kyber and scaling kyber cryptosystems are applied to ensure data preservation. Kyber is a cryptosystem scheme that adopts public-key encryption and private-key decryption processes. Each block of the blockchain uses the cloud-based Argon-2di hashing method for cloud-level data storage, providing the highest level of confidentiality. Argon-2di is a stable hashing algorithm that uses a hybrid approach to access the memory that relied on dependent and independent memory features. Based on the attack-resistant rate (> 96%), computational cost (in time), and other main metrics, the proposed M-BSF security architecture appears to be an acceptable alternative to the current methodologies.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":null,"pages":null},"PeriodicalIF":5.5,"publicationDate":"2023-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138536985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}