Pub Date: 2024-11-05 | DOI: 10.1016/j.jnca.2024.104049
Farkhondeh Kiaee , Ehsan Arianyan
In recent years, the emergence of huge Edge-Cloud environments has faced great challenges such as ever-increasing energy demand, extensive adoption of Internet of Things (IoT) devices, and the goals of efficiency and reliability. Containers have become increasingly popular for encapsulating various services, and container migration among Edge-Cloud nodes may enable new use cases in various IoT domains. In this study, an efficient joint VM and container consolidation solution is proposed for the Edge-Cloud environment. The proposed method uses Auto-Encoder (AE) and TOPSIS modules for two stages of consolidation subproblems, namely, Joint VM and Container Multi-criteria Migration Decision (AE-TOPSIS-JVCMMD) and Edge-Cloud Power SLA Aware (AE-TOPSIS-ECPSA) VM placement. The module extracts the contribution of different criteria and computes the scores of all alternatives. Combining the non-linear contribution-learning ability of the AE algorithm with the intelligent ranking of the TOPSIS algorithm, the proposed method avoids the bias of conventional multi-criteria approaches toward alternatives that score well on two or more dependent criteria. Simulations conducted using the CloudSim simulator confirm the effectiveness of the proposed policies, demonstrating reductions of 41.5%, 30.13%, 12.9%, 10.3%, 58.2%, and 56.1% in energy consumption, SLA violations, response time, running cost, number of VM migrations, and number of container migrations, respectively, compared with state-of-the-art methods.
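The TOPSIS stage above ranks migration alternatives by their closeness to an ideal solution. A minimal pure-Python sketch of that ranking step follows; the criteria, weights, and candidate host values are invented for illustration (the paper learns the criterion contributions with an auto-encoder rather than fixing weights by hand):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix  : rows = alternatives, columns = criteria
    weights : per-criterion weights (hand-set here; the paper derives
              criterion contributions from an auto-encoder instead)
    benefit : True for benefit criteria, False for cost criteria
    """
    cols = list(zip(*matrix))
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    v = [[w * x / n for x, n, w in zip(row, norms, weights)]
         for row in matrix]
    vcols = list(zip(*v))
    # ideal best/worst per criterion depend on benefit vs. cost direction
    best = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # relative closeness to the ideal: higher is better
    return [dist(r, worst) / (dist(r, best) + dist(r, worst)) for r in v]

# three candidate hosts scored on CPU headroom (benefit),
# power draw in watts (cost), and latency in ms (cost) -- invented numbers
scores = topsis([[0.6, 120, 10],
                 [0.3,  90,  5],
                 [0.8, 150, 20]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
```

Scores fall in [0, 1]; `scores.index(max(scores))` would be the preferred migration target under these assumed weights.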
{"title":"Joint VM and container consolidation with auto-encoder based contribution extraction of decision criteria in Edge-Cloud environment","authors":"Farkhondeh Kiaee , Ehsan Arianyan","doi":"10.1016/j.jnca.2024.104049","DOIUrl":"10.1016/j.jnca.2024.104049","url":null,"abstract":"<div><div>In the recent years, emergence huge Edge-Cloud environments faces great challenges like the ever-increasing energy demand, the extensive Internet of Things (IoT) devices adaptation, and the goals of efficiency and reliability. Containers has become increasingly popular to encapsulate various services and container migration among Edge-Cloud nodes may enable new use cases in various IoT domains. In this study, an efficient joint VM and container consolidation solution is proposed for Edge-Cloud environment. The proposed method uses the Auto-Encoder (AE) and TOPSIS modules for two stages of consolidation subproblems, namely, Joint VM and Container Multi-criteria Migration Decision (AE-TOPSIS-JVCMMD) and Edge-Cloud Power SLA Aware (AE-TOPSIS-ECPSA) for VM placement. The module extracts the contribution of different criteria and computes the scores of all the alternatives. Combining the non-linear contribution learning ability of the AE algorithm and the intelligent ranking of the TOPSIS algorithm, the proposed method successfully avoids the bias of conventional multi-criteria approaches toward alternatives that have good evaluations in two or more dependent criteria. 
The simulations conducted using the Cloudsim simulator confirm the effectiveness of the proposed policies, demonstrating to 41.5%, 30.13%, 12.9%, 10.3%, 58.2% and 56.1% reductions in energy consumption, SLA violation, response time, running cost, number of VM migrations, and number of container migrations, respectively in comparison with state of the arts.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104049"},"PeriodicalIF":7.7,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Undoubtedly, blockchain technology has emerged as one of the most fascinating advancements of recent decades. Its rapid development has attracted a diverse range of experts from various fields. Over the past five years, numerous blockchains have been launched, hosting a multitude of applications with varying objectives. However, a key limitation of blockchain-based services and applications is their isolation within their respective host blockchains, preventing them from recording or accessing data from other blockchains. This limitation has spurred developers to explore solutions for connecting different blockchains without relying on centralized intermediaries. This new wave of projects, officially called Layer 3 (L3) initiatives, has introduced innovative concepts like cross-chain transactions, multi-chain frameworks, hyper-chains, and more. This study provides an overview of these significant concepts and L3 projects while categorizing them into interoperability and scalability solutions. We then discuss the opportunities, challenges, and future horizons of L3 solutions and present a SWOT (Strengths–Weaknesses–Opportunities–Threats) analysis of the two groups of L3 solutions and all other proposals. As an important part, we introduce the concept of Universal decentralized finance (DeFi) as one of the most exciting applications of L3s, which decreases transaction costs, enhances the security of crowdfunding, and provides many improvements in distributed lending-borrowing processes. The final part of this study maps the blockchain trilemma onto L3s and identifies current challenges from the L3 perspective. Ultimately, future directions of L3 for both the academic and industry sectors are discussed.
{"title":"Third layer blockchains are being rapidly developed: Addressing state-of-the-art paradigms and future horizons","authors":"Saeed Banaeian Far, Seyed Mojtaba Hosseini Bamakan","doi":"10.1016/j.jnca.2024.104044","DOIUrl":"10.1016/j.jnca.2024.104044","url":null,"abstract":"<div><div>Undoubtedly, blockchain technology has emerged as one of the most fascinating advancements in recent decades. Its rapid development has attracted a diverse range of experts from various fields. Over the past five years, numerous blockchains have been launched, hosting a multitude of applications with varying objectives. However, a key limitation of blockchain-based services and applications is their isolation within their respective host blockchains, preventing them from recording or accessing data from other blockchains. This limitation has spurred developers to explore solutions for connecting different blockchains without relying on centralized intermediaries. This new wave of projects, officially called Layer 3 projects (L3) initiatives, has introduced innovative concepts like cross-chain transactions, multi-chain frameworks, hyper-chains, and more. This study provides an overview of these significant concepts and L3 projects while categorizing them into interoperability and scalability solutions. We then discuss opportunities, challenges, and future horizons of L3 solutions and present a SWOT (Strengths–Weaknesses–Opportunities–Threats) analysis of the two groups of L3 solutions and all other proposals. As an important part, we introduce the concept of Universal decentralized finance (DeFi) as one the most exciting applications of L3s which decreases transaction costs, enhances the security of crowdfunding, and provides many improvements in distributed lending-borrowing processes. The final part of this study maps the blockchain’s triangle problem on L3s and identifies current challenges from the L3’s perspective. 
Ultimately, the future directions of L3 for both academic and industry sectors are discussed.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104044"},"PeriodicalIF":7.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24 | DOI: 10.1016/j.jnca.2024.104047
Hao Peng , Yifan Zhao , Dandan Zhao , Bo Zhang , Cheng Qian , Ming Zhong , Jianmin Han , Xiaoyang Liu , Wei Wang
In real-world complex systems, most networks are interconnected with other networks through interlayer dependencies, forming multilayer interdependent networks. In each system, the interactions between nodes are not limited to pairwise ones but also occur as higher-order interactions composed of three or more individuals, thus inducing a multilayer interdependent higher-order network (MIHN). First, we build four types of artificial MIHN models (i.e., chain-like, tree-like, star-like, and ring-like), in which the higher-order interactions are described by simplicial complexes and the interlayer dependency is built via one-to-one matching dependency links. Then, we propose a cascading failure model on the MIHN and suggest a corresponding percolation-based theory to study its robustness by investigating the giant connected component (GCC) and the percolation threshold. We find that the density of the simplicial complexes and the number of layers of the network affect its percolation behavior. When the density of simplicial complexes exceeds a certain threshold, the network exhibits a double transition, and an increase in network layers significantly enhances the vulnerability of the MIHN. By comparing the simulation results of the four types of MIHNs, we observe that under the same density of simplicial complexes, the size of the GCC is independent of the topological structure of the MIHN after removing a certain number of nodes. Although the cascading failure processes of MIHNs with different structures differ, the final results tend to be the same. We further analyze in detail the cascading failure process of MIHNs with different structures and elucidate the factors influencing the speed of cascading failures. Among the four types, the chain-like MIHN has the slowest cascading failure rate and the most stable robustness, followed by the tree-like and star-like MIHNs. The ring-like MIHN has the fastest cascading failure rate and the weakest robustness due to its ring structure. Finally, we give the time required for MIHNs with different structures to reach the stable state during the cascading failure process and find that the closer the network is to the percolation threshold, the more time it requires to reach the stable state.
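The cascading-failure process described above (each layer keeps only its giant connected component, and interlayer dependencies propagate removals) can be sketched in pure Python. This toy uses two pairwise Erdős-Rényi layers with a one-to-one dependency, not the paper's simplicial-complex layers, and all parameters are invented:

```python
import random

def er_graph(n, k, rng):
    """Adjacency sets of an Erdos-Renyi graph with mean degree ~ k."""
    adj = {i: set() for i in range(n)}
    p = k / n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def gcc(adj, alive):
    """Largest connected component of the subgraph induced by `alive`."""
    best, seen = set(), set()
    for s in alive:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u] if v in alive and v not in comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(n=300, k=4, attack=0.3, seed=7):
    """Fraction of nodes in the mutual giant component after randomly
    removing `attack * n` nodes, iterating the interdependent pruning."""
    rng = random.Random(seed)
    layer_a = er_graph(n, k, rng)
    layer_b = er_graph(n, k, rng)
    alive = set(range(n)) - set(rng.sample(range(n), int(attack * n)))
    while True:
        # layer A keeps its GCC; the one-to-one dependency restricts
        # layer B to those partners, which must again form a GCC, etc.
        survivors = gcc(layer_b, gcc(layer_a, alive))
        if survivors == alive:
            return len(alive) / n
        alive = survivors

frac = cascade()
```

Sweeping `attack` from 0 to 1 and plotting the returned fraction traces a percolation curve; near the threshold the pruning loop takes the most iterations to stabilize, which mirrors the paper's observation about the time needed to reach the stable state.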
{"title":"Robustness of multilayer interdependent higher-order network","authors":"Hao Peng , Yifan Zhao , Dandan Zhao , Bo Zhang , Cheng Qian , Ming Zhong , Jianmin Han , Xiaoyang Liu , Wei Wang","doi":"10.1016/j.jnca.2024.104047","DOIUrl":"10.1016/j.jnca.2024.104047","url":null,"abstract":"<div><div>In real-world complex systems, most networks are interconnected with other networks through interlayer dependencies, forming multilayer interdependent networks. In each system, the interactions between nodes are not limited to pairwise but also exist in a higher-order interaction composed of three or more individuals, thus inducing a multilayer interdependent higher-order network (MIHN). First, we build four types of artificial MIHN models (i.e., chain-like, tree-like, star-like and ring-like), in which the higher-order interactions are described by the simplicial complexes, and the interlayer dependency is built via a one-to-one matching dependency link. Then, we propose a cascading failure model on MIHN and suggest a corresponding percolation-based theory to study the robustness of MIHN by investigating the giant connected components (GCC) and percolation threshold. We find that the density of the simplicial complexes and the number of layers of the network affect its penetration behavior. When the density of simplicial complexes exceeds a certain threshold, the network has a double transition, and the increase in network layers significantly enhances the vulnerability of MIHN. By comparing the simulation results of MIHNs with four types, we observe that under the same density of simplicial complexes, the size of the GCC is independent of the topological structures of MIHN after removing a certain number of nodes. Although the cascading failure process of MIHNs with different structures is different, the final results tend to be the same. 
We further analyze in detail the cascading failure process of MIHN with different structures and elucidate the factors influencing the speed of cascading failures. Among these four types of MIHNs, the chain-like MIHN has the slowest cascading failure rate and more stable robustness compared to the other three structures, followed by the tree-like MIHN and star-like MIHN. The ring-like MIHN has the fastest cascading failure rate and weakest robustness due to its ring structure. Finally, we give the time required for the MIHN with different structures to reach the stable state during the cascading failure process and find that the closer to the percolation threshold, the more time the network requires to reach the stable state.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104047"},"PeriodicalIF":7.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24 | DOI: 10.1016/j.jnca.2024.104045
Goshgar Ismayilov, Can Özturan
Blockchains are decentralized and immutable databases that are shared among the nodes of a network. Although blockchains have attracted a great deal of attention in recent years by disrupting traditional financial systems, transaction privacy is still a challenging issue that needs to be addressed and analyzed. In the first part of this paper, we propose a Private Token Transfer System (PTTS) for the Ethereum public blockchain. For the proposed framework, a zero-knowledge-based protocol has been designed using ZoKrates and integrated into our private token smart contract. With the help of the web user interface designed, end users can interact with the smart contract without any third-party setup. In the second part of the paper, we provide a security and privacy analysis, including the replay attack and the balance range privacy attack, which is modeled as a network flow problem. It is shown that if some balance ranges are deliberately leaked to particular organizations or adversarial entities, it is possible to extract meaningful information about user balances by employing minimum-cost network flow algorithms, which have polynomial complexity. The experimental study reports the Ethereum gas consumption and proof generation times for the proposed framework. It also reports network solution times and goodness rates for a subset of addresses under the balance range privacy attack with respect to the number of addresses, the number of transactions, and the ratio of leaked transfer transaction amounts.
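The paper solves the balance range privacy attack with minimum-cost network flow. The toy below uses a much simpler mechanism, interval propagation over leaked transfer amounts, purely to illustrate why leaked ranges combined with observed transfers reveal balance information; addresses, ranges, and amounts are all invented:

```python
def tighten(balances, transfers):
    """Interval propagation over leaked transfer amounts.

    balances  : {addr: [lo, hi]} leaked balance ranges before the transfers
    transfers : ordered (sender, receiver, amount) triples
    Returns tightened per-address ranges after replaying the transfers.
    (A toy stand-in: the paper infers ranges with min-cost flow instead,
    and assumes every replayed transfer is actually feasible.)
    """
    b = {addr: list(r) for addr, r in balances.items()}
    for sender, receiver, amt in transfers:
        # the sender must have held at least `amt`, so its lower bound rises
        b[sender][0] = max(b[sender][0], amt)
        b[sender] = [b[sender][0] - amt, b[sender][1] - amt]
        b[receiver] = [b[receiver][0] + amt, b[receiver][1] + amt]
    return b

ranges = tighten({"alice": [0, 100], "bob": [0, 50]},
                 [("alice", "bob", 60)])
# after the transfer alice must hold [0, 40] and bob [60, 110]
```

Even this crude replay turns a vague leaked range ([0, 100]) into a tight one, which is the intuition behind the paper's flow-based analysis of leaked transfer ratios.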
{"title":"PTTS: Zero-knowledge proof-based private token transfer system on Ethereum blockchain and its network flow based balance range privacy attack analysis","authors":"Goshgar Ismayilov, Can Özturan","doi":"10.1016/j.jnca.2024.104045","DOIUrl":"10.1016/j.jnca.2024.104045","url":null,"abstract":"<div><div>Blockchains are decentralized and immutable databases that are shared among the nodes of the network. Although blockchains have attracted a great scale of attention in the recent years by disrupting the traditional financial systems, the transaction privacy is still a challenging issue that needs to be addressed and analyzed. We propose a <em>P</em>rivate <em>T</em>oken <em>T</em>ransfer <em>S</em>ystem (PTTS) for the Ethereum public blockchain in the first part of this paper. For the proposed framework, zero-knowledge based protocol has been designed using Zokrates and integrated into our private token smart contract. With the help of web user interface designed, the end users can interact with the smart contract without any third-party setup. In the second part of the paper, we provide security and privacy analysis including the replay attack and the balance range privacy attack which has been modeled as a network flow problem. It is shown that in case some balance ranges are deliberately leaked out to particular organizations or adversarial entities, it is possible to extract meaningful information about the user balances by employing minimum cost flow network algorithms that have polynomial complexity. The experimental study reports the Ethereum gas consumption and proof generation times for the proposed framework. 
It also reports network solution times and goodness rates for a subset of addresses under the balance range privacy attack with respect to number of addresses, number of transactions and ratio of leaked transfer transaction amounts.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104045"},"PeriodicalIF":7.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-19 | DOI: 10.1016/j.jnca.2024.104037
Shinu M. Rajagopal , Supriya M. , Rajkumar Buyya
Blockchain technology combined with Federated Learning (FL) offers a promising solution for enhancing privacy, security, and efficiency in medical IoT applications across edge, fog, and cloud computing environments. This approach enables multiple medical IoT devices at the network edge to collaboratively train a global machine learning model without sharing raw data, addressing privacy concerns associated with centralized data storage. This paper presents a blockchain- and FL-based smart decision-making framework for ECG data in microservice-based IoT medical applications. Leveraging edge/fog computing for real-time critical applications, the framework implements an FL model across the edge, fog, and cloud layers. Evaluation against criteria including energy consumption, latency, execution time, cost, and network usage shows that edge-based deployment outperforms fog and cloud, with significant advantages in energy consumption (0.1% vs. Fog, 0.9% vs. Cloud), network usage (1.1% vs. Fog, 31% vs. Cloud), cost (3% vs. Fog, 20% vs. Cloud), execution time (16% vs. Fog, 28% vs. Cloud), and latency (1% vs. Fog, 79% vs. Cloud).
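The collaborative training described above can be sketched with the standard FedAvg scheme: each device fits the current model on its private readings, and only the weights are averaged (size-weighted) at the aggregator. This is a generic one-parameter toy with invented data, not the paper's ECG model:

```python
import random

def local_step(w, data, lr=0.1):
    """One epoch of SGD on a device's private (x, y) readings."""
    for x, y in data:
        w -= lr * (w * x - y) * x   # gradient of 0.5 * (w*x - y)^2
    return w

def fed_avg(devices, rounds=30):
    """Size-weighted FedAvg: weights leave the devices, raw data never does."""
    w = 0.0
    sizes = [len(d) for d in devices]
    total = sum(sizes)
    for _ in range(rounds):
        local_ws = [local_step(w, d) for d in devices]
        w = sum(lw * n for lw, n in zip(local_ws, sizes)) / total
    return w

# three edge devices whose readings all follow y = 2x plus local noise
rng = random.Random(1)
devices = [[(x, 2.0 * x + rng.gauss(0, 0.1)) for x in (0.5, 1.0, 1.5)]
           for _ in range(3)]
w = fed_avg(devices)
```

The aggregated `w` approaches the shared underlying slope of 2 even though no device ever discloses its readings, which is the privacy property the abstract relies on.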
{"title":"Leveraging blockchain and federated learning in Edge-Fog-Cloud computing environments for intelligent decision-making with ECG data in IoT","authors":"Shinu M. Rajagopal , Supriya M. , Rajkumar Buyya","doi":"10.1016/j.jnca.2024.104037","DOIUrl":"10.1016/j.jnca.2024.104037","url":null,"abstract":"<div><div>Blockchain technology combined with Federated Learning (FL) offers a promising solution for enhancing privacy, security, and efficiency in medical IoT applications across edge, fog, and cloud computing environments. This approach enables multiple medical IoT devices at the network edge to collaboratively train a global machine learning model without sharing raw data, addressing privacy concerns associated with centralized data storage. This paper presents a blockchain and FL-based Smart Decision Making framework for ECG data in microservice-based IoT medical applications. Leveraging edge/fog computing for real-time critical applications, the framework implements a FL model across edge, fog, and cloud layers. Evaluation criteria including energy consumption, latency, execution time, cost, and network usage show that edge-based deployment outperforms fog and cloud, with significant advantages in energy consumption (0.1% vs. Fog, 0.9% vs. Cloud), network usage (1.1% vs. Fog, 31% vs. Cloud), cost (3% vs. Fog, 20% vs. Cloud), execution time (16% vs. Fog, 28% vs. Cloud), and latency (1% vs. Fog, 79% vs. 
Cloud).</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104037"},"PeriodicalIF":7.7,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-16 | DOI: 10.1016/j.jnca.2024.104043
Yong Liu , Yuanhang Ge , Qian Meng , Quanze Liu
In traditional networks, the static configuration of devices increases the complexity of network management and limits the development of network functions. Software-Defined Networking (SDN) employs controllers to manage switches, thereby simplifying network management. However, with the expansion of network scale, the early single-controller architecture gradually became a performance bottleneck for the entire network. To solve this problem, SDN uses multiple controllers to manage the network, which improves its scalability. However, due to dynamic changes in network traffic, multi-controller architectures face the challenge of load imbalance among controllers. In recent years, researchers have proposed various novel load optimization strategies to improve resource utilization and the performance of SDN networks. This paper reviews load optimization strategies in SDN, including the latest research results in switch migration and controller placement. Subsequently, we analyze the advantages and disadvantages of existing load optimization strategies. Finally, we discuss future development directions for load optimization strategies.
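Switch migration, one of the two strategy families surveyed, can be illustrated with a deliberately naive greedy policy: repeatedly move the lightest switch off the most loaded controller onto the least loaded one. Controller names, loads, and the threshold below are invented, and real policies also weigh migration cost and control-plane latency:

```python
def migrate(controllers, threshold=0.8):
    """Greedy switch-migration sketch for SDN controller load balancing.

    controllers : {controller: {switch: load}} current mapping, with load
                  as a request rate normalized to controller capacity 1.0.
    Assumes the total load can actually be spread under the threshold;
    production policies bound the number of migrations instead.
    """
    def load(c):
        return sum(controllers[c].values())

    while True:
        hot = max(controllers, key=load)
        if load(hot) <= threshold:
            return controllers
        cold = min(controllers, key=load)
        # move the cheapest switch: smallest disruption per migration
        sw = min(controllers[hot], key=controllers[hot].get)
        controllers[cold][sw] = controllers[hot].pop(sw)

state = migrate({"c1": {"s1": 0.5, "s2": 0.4, "s3": 0.3},
                 "c2": {"s4": 0.1}})
```

Starting from an imbalanced mapping (c1 at 1.2, c2 at 0.1), the loop drains c1 until every controller sits at or below the threshold, which is the basic objective the surveyed strategies refine.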
{"title":"Controller load optimization strategies in Software-Defined Networking: A survey","authors":"Yong Liu , Yuanhang Ge , Qian Meng , Quanze Liu","doi":"10.1016/j.jnca.2024.104043","DOIUrl":"10.1016/j.jnca.2024.104043","url":null,"abstract":"<div><div>In traditional networks, the static configuration of devices increases the complexity of network management and limits the development of network functions. Software-Defined Networking (SDN) employs controllers to manage switches, thereby simplifying network management. However, with the expansion of network scale, the early single controller architecture gradually became a performance bottleneck for the entire network. To solve this problem, SDN uses multiple controllers to manage the network, which improves the scalability of the network. However, due to the dynamic change in network traffic, multi-controller architectures face the challenge of load imbalance among controllers. In recent years, researchers have proposed various novel load optimization strategies to improve resource utilization and the performance of SDN networks. This paper reviews load optimization strategies in SDN, including the latest research results in switch migration and controller placement. Subsequently, we analyze the advantages and disadvantages of existing load optimization strategies. 
Finally, we discuss the future development direction of the load optimization strategy.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104043"},"PeriodicalIF":7.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-14 | DOI: 10.1016/j.jnca.2024.104040
Muhammad Sajjad Akbar , Zawar Hussain , Muhammad Ikram , Quan Z. Sheng , Subhas Chandra Mukhopadhyay
Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly, and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges ahead, both research communities and industry are exploring the sixth-generation (6G) Terahertz-based wireless network, which is expected to be offered to industrial users within just ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial to meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers the disruptive and innovative integration of 6G with advanced architectures and networks such as software-defined networking (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI)-oriented technologies. The survey also addresses privacy and security concerns and presents potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. Furthermore, it identifies current challenges and outlines future research directions to facilitate the deployment of 6G networks.
{"title":"On challenges of sixth-generation (6G) wireless networks: A comprehensive survey of requirements, applications, and security issues","authors":"Muhammad Sajjad Akbar , Zawar Hussain , Muhammad Ikram , Quan Z. Sheng , Subhas Chandra Mukhopadhyay","doi":"10.1016/j.jnca.2024.104040","DOIUrl":"10.1016/j.jnca.2024.104040","url":null,"abstract":"<div><div>Fifth-generation (5G) wireless networks are likely to offer high data rates, increased reliability, and low delay for mobile, personal, and local area networks. Along with the rapid growth of smart wireless sensing and communication technologies, data traffic has increased significantly and existing 5G networks are not able to fully support future massive data traffic for services, storage, and processing. To meet the challenges that are ahead, both research communities and industry are exploring the sixth generation (6G) Terahertz-based wireless network that is expected to be offered to industrial users in just ten years. Gaining knowledge and understanding of the different challenges and facets of 6G is crucial in meeting the requirements of future communication and addressing evolving quality of service (QoS) demands. This survey provides a comprehensive examination of specifications, requirements, applications, and enabling technologies related to 6G. It covers disruptive and innovative, integration of 6G with advanced architectures and networks such as software-defined networks (SDN), network functions virtualization (NFV), Cloud/Fog computing, and Artificial Intelligence (AI) oriented technologies. The survey also addresses privacy and security concerns and provides potential futuristic use cases such as virtual reality, smart healthcare, and Industry 5.0. 
Furthermore, it identifies the current challenges and outlines future research directions to facilitate the deployment of 6G networks.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104040"},"PeriodicalIF":7.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142445657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-10 | DOI: 10.1016/j.jnca.2024.104042
Mina Emami Khansari, Saeed Sharifian
Serverless computing has emerged as a new cloud computing model which, in contrast to IoT, offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability, and resource management, specifically in the face of irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution for hosting IoT applications, it is not suitable for bandwidth-limited, real-time, and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS) comprise a set of chained event-driven microservices which have to be assigned to available instances. IoT microservice orchestration is still a challenging issue in serverless computing architecture due to the dynamic, heterogeneous, and large-scale IoT environment with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel DRL-based microservice orchestration approach for the serverless edge-cloud continuum that minimizes resource utilization and delay. This approach, unlike existing methods, is distributed and requires a minimal subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture, and is thus suitable for IoT environments. Experiments conducted using a number of real-world scenarios demonstrate an 18% improvement in the number of successfully composed applications compared to state-of-the-art methods, including the Load Balance and Shortest Path algorithms.
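The core placement decision a DRL orchestrator faces (run a function at the edge and congest the local queue, or offload and pay WAN latency) can be shown with tabular Q-learning on an invented five-state toy. The paper's approach is distributed deep RL over a far richer state space; every number here is made up for illustration:

```python
import random

def train(episodes=600, horizon=8, seed=3):
    """Tabular Q-learning sketch of an edge-vs-cloud placement decision.

    State  : edge queue length 0..4
    Action : 0 = run on edge (cheap when idle, congests the queue)
             1 = offload to cloud (fixed WAN latency, queue drains)
    Reward : negative latency.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]
    alpha, gamma, eps = 0.2, 0.9, 0.2
    for _ in range(episodes):
        state = rng.randrange(5)            # random resets cover all states
        for _ in range(horizon):
            if rng.random() < eps:          # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] >= q[state][1] else 1
            if a == 0:                      # edge latency grows with queue
                reward, nxt = -(1 + state), min(state + 1, 4)
            else:                           # cloud: constant latency, drain
                reward, nxt = -3, max(state - 1, 0)
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
```

The learned table prefers offloading when the edge queue is saturated, which is the congestion-aware trade-off a real orchestrator learns over latency, utilization, and cost.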
{"title":"A deep reinforcement learning approach towards distributed Function as a Service (FaaS) based edge application orchestration in cloud-edge continuum","authors":"Mina Emami Khansari, Saeed Sharifian","doi":"10.1016/j.jnca.2024.104042","DOIUrl":"10.1016/j.jnca.2024.104042","url":null,"abstract":"<div><div>Serverless computing has emerged as a new cloud computing model which, in contrast to IoT devices, offers unlimited and scalable access to resources. This paradigm improves resource utilization, cost, scalability, and resource management, particularly under irregular incoming traffic. While cloud computing has been known as a reliable computing and storage solution for hosting IoT applications, it is not suitable for bandwidth-limited, real-time, and secure applications. Therefore, shifting the resources of the cloud-edge continuum towards the edge can mitigate these limitations. In serverless architecture, applications implemented as Function as a Service (FaaS) comprise a set of chained, event-driven microservices that must be assigned to available instances. IoT microservice orchestration remains a challenging issue in serverless computing architectures due to the dynamic, heterogeneous, and large-scale nature of IoT environments with limited resources. The integration of FaaS and distributed Deep Reinforcement Learning (DRL) can transform serverless computing by improving microservice execution effectiveness and optimizing real-time application orchestration. This combination improves scalability and adaptability across the edge-cloud continuum. In this paper, we present a novel DRL-based microservice orchestration approach for the serverless edge-cloud continuum that minimizes resource utilization and delay. Unlike existing methods, this approach is distributed and requires only a minimal subset of realistic data in each interval to find optimal compositions in the proposed edge serverless architecture, making it suitable for IoT environments. Experiments conducted using a number of real-world scenarios demonstrate an 18% improvement in the number of successfully composed applications compared to state-of-the-art methods, including the Load Balance and Shortest Path algorithms.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104042"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142445470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
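As a rough illustration of the kind of placement decision such a DRL orchestrator learns, the sketch below uses single-state tabular Q-learning to pick the node that minimizes a weighted delay/utilization cost. The node names, cost numbers, weights, and hyperparameters are all hypothetical stand-ins, not values from the paper:

```python
import random

# Toy edge-cloud continuum: two edge nodes and one cloud node (hypothetical).
NODES = ["edge-0", "edge-1", "cloud"]
DELAY = {"edge-0": 1.0, "edge-1": 1.5, "cloud": 4.0}   # assumed network + queueing delay
LOAD  = {"edge-0": 0.8, "edge-1": 0.5, "cloud": 0.1}   # assumed current utilization

def reward(node):
    # Negative weighted sum of the two objectives the paper minimizes
    # (delay and resource utilization); the 0.7/0.3 weights are illustrative.
    return -(0.7 * DELAY[node] + 0.3 * LOAD[node])

def train(episodes=2000, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {n: 0.0 for n in NODES}            # one-state Q-table (bandit setting)
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(NODES)           # explore a random placement
        else:
            a = max(q, key=q.get)           # exploit the current best placement
        q[a] += alpha * (reward(a) - q[a])  # incremental Q-update
    return q

q = train()
best = max(q, key=q.get)  # node the learned policy would place the microservice on
```

With deterministic rewards the Q-values converge to the per-node costs, so the policy settles on the lowest-cost node; a real distributed DRL orchestrator would instead learn from observed state features over many chained microservices.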
Pub Date : 2024-10-10DOI: 10.1016/j.jnca.2024.104036
Zhengqiu Weng , Weinuo Zhang , Tiantian Zhu , Zhenhao Dou , Haofei Sun , Zhanxiang Ye , Ye Tian
Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods rely heavily on expert rules or specific training scenarios, resulting in a lack of both generality and reliability. Therefore, this paper proposes RT-APT, a novel real-time APT attack anomaly detection system for large-scale provenance graphs. First, a provenance graph is constructed from kernel logs, and the WL subtree kernel algorithm is used to aggregate contextual information of the nodes in the provenance graph, yielding vector representations. Second, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, K-means clustering is performed on benign feature-vector sequences, where each cluster represents a different system state, so that abnormal behaviors during system execution can be identified. RT-APT can therefore detect unknown attacks and capture long-term system behaviors. Experiments have been carried out to explore the optimal parameter settings under which RT-APT performs best. In addition, we compare RT-APT with state-of-the-art approaches on three datasets: Laboratory, StreamSpot, and Unicorn. The results demonstrate that our proposed method outperforms the state-of-the-art approaches in terms of runtime performance, memory overhead, and CPU usage.
{"title":"RT-APT: A real-time APT anomaly detection method for large-scale provenance graph","authors":"Zhengqiu Weng , Weinuo Zhang , Tiantian Zhu , Zhenhao Dou , Haofei Sun , Zhanxiang Ye , Ye Tian","doi":"10.1016/j.jnca.2024.104036","DOIUrl":"10.1016/j.jnca.2024.104036","url":null,"abstract":"<div><div>Advanced Persistent Threats (APTs) are prevalent in the field of cyber attacks, where attackers employ advanced techniques to control targets and exfiltrate data without being detected by the system. Existing APT detection methods rely heavily on expert rules or specific training scenarios, resulting in a lack of both generality and reliability. Therefore, this paper proposes RT-APT, a novel real-time APT attack anomaly detection system for large-scale provenance graphs. First, a provenance graph is constructed from kernel logs, and the WL subtree kernel algorithm is used to aggregate contextual information of the nodes in the provenance graph, yielding vector representations. Second, the FlexSketch algorithm transforms the streaming provenance graph into a sequence of feature vectors. Finally, K-means clustering is performed on benign feature-vector sequences, where each cluster represents a different system state, so that abnormal behaviors during system execution can be identified. RT-APT can therefore detect unknown attacks and capture long-term system behaviors. Experiments have been carried out to explore the optimal parameter settings under which RT-APT performs best. In addition, we compare RT-APT with state-of-the-art approaches on three datasets: Laboratory, StreamSpot, and Unicorn. The results demonstrate that our proposed method outperforms the state-of-the-art approaches in terms of runtime performance, memory overhead, and CPU usage.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104036"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
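The WL-subtree aggregation step that abstract describes can be illustrated in miniature: one relabeling iteration replaces each node's label with its label plus the sorted labels of its neighbors, and the histogram of all labels serves as the graph's feature vector, which is then compared against a benign cluster center. The toy graphs, labels, and the degenerate one-snapshot "cluster" below are illustrative assumptions, not the paper's implementation (which uses FlexSketch and full K-means):

```python
from collections import Counter

def wl_step(labels, adj):
    # One Weisfeiler-Leman refinement: new label = old label + neighbor labels.
    return {v: lab + "(" + ",".join(sorted(labels[u] for u in adj.get(v, []))) + ")"
            for v, lab in labels.items()}

def wl_feature(labels, adj, h=1):
    # Histogram over all labels produced across h refinement iterations.
    hist = Counter(labels.values())
    for _ in range(h):
        labels = wl_step(labels, adj)
        hist.update(labels.values())
    return hist

def distance(h1, h2):
    # L1 distance between label histograms (Counter returns 0 for missing keys).
    return sum(abs(h1[k] - h2[k]) for k in set(h1) | set(h2))

# Benign snapshot: a process reads a file and writes a socket (made-up example).
benign_labels = {1: "proc", 2: "file", 3: "sock"}
benign_adj = {1: [2, 3], 2: [1], 3: [1]}

# Suspicious snapshot: the same process additionally spawns a shell.
sus_labels = {1: "proc", 2: "file", 3: "sock", 4: "shell"}
sus_adj = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}

benign_vec = wl_feature(benign_labels, benign_adj)
centroid = benign_vec  # stand-in for a benign K-means cluster center
score = distance(wl_feature(sus_labels, sus_adj), centroid)  # anomaly score
```

A snapshot matching the benign cluster scores 0, while the extra `shell` node changes both the raw and the refined labels and so pushes the score above a detection threshold.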
Pub Date : 2024-10-10DOI: 10.1016/j.jnca.2024.104039
Mingyang Zhao, Chengtai Liu, Sifeng Zhu
With the surge of transportation data and the diversification of services, the resources for data processing in intelligent transportation systems have become increasingly limited. To solve this problem, this paper studies the problem of computation offloading and resource allocation in intelligent transportation systems, adopting edge computing, NOMA communication technology, and edge (content) caching. The goal is to minimize the time consumption and energy consumption of the system for processing structured tasks from terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation, and transmission power allocation. This problem is a nonconvex mixed-integer nonlinear programming problem. To solve this challenging problem, we propose a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge migration based on MO-MFEA. Extensive simulation results demonstrate the convergence and effectiveness of MO-MFEA-S.
{"title":"Joint optimization scheme for task offloading and resource allocation based on MO-MFEA algorithm in intelligent transportation scenarios","authors":"Mingyang Zhao, Chengtai Liu, Sifeng Zhu","doi":"10.1016/j.jnca.2024.104039","DOIUrl":"10.1016/j.jnca.2024.104039","url":null,"abstract":"<div><div>With the surge of transportation data and the diversification of services, the resources for data processing in intelligent transportation systems have become increasingly limited. To solve this problem, this paper studies the problem of computation offloading and resource allocation in intelligent transportation systems, adopting edge computing, NOMA communication technology, and edge (content) caching. The goal is to minimize the time consumption and energy consumption of the system for processing structured tasks from terminal devices by jointly optimizing the offloading decisions, caching strategies, computation resource allocation, and transmission power allocation. This problem is a nonconvex mixed-integer nonlinear programming problem. To solve this challenging problem, we propose a multi-task multi-objective optimization algorithm (MO-MFEA-S) with adaptive knowledge migration based on MO-MFEA. Extensive simulation results demonstrate the convergence and effectiveness of MO-MFEA-S.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"233 ","pages":"Article 104039"},"PeriodicalIF":7.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142441835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
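Because the scheme above optimizes two objectives (time and energy) jointly, the basic building block of any such multi-objective evolutionary algorithm is Pareto non-dominated filtering over candidate solutions. The sketch below, with made-up (time, energy) outcomes for hypothetical offloading decisions, shows how dominated candidates are discarded; it is a generic illustration, not the MO-MFEA-S selection step itself:

```python
def dominates(a, b):
    # For minimization: a dominates b if it is no worse in every objective
    # and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Keep only the candidates not dominated by any other candidate.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (time, energy) outcomes of five candidate offloading decisions.
cands = [(5.0, 2.0), (3.0, 4.0), (4.0, 3.0), (6.0, 5.0), (3.0, 4.5)]
front = pareto_front(cands)
```

Here (6.0, 5.0) is dominated by (4.0, 3.0) and (3.0, 4.5) by (3.0, 4.0), so the front keeps the three trade-off solutions; an evolutionary loop would then breed new candidates from this front generation after generation.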