Pub Date: 2024-08-30 | DOI: 10.1016/j.jnca.2024.104008
Guoqing Tian, Li Pan, Shijun Liu
Cloud gaming (CG), as an emergent computing paradigm, is revolutionizing the gaming industry. Cloud gaming service providers (CGSPs) have begun to integrate edge computing with the cloud to provide services, aiming to maximize gaming service revenue while weighing the costs incurred against the benefits generated. However, maximizing gaming service revenue is non-trivial: future requests are not known beforehand, and poor resource provisioning may result in exorbitant costs. In addition, the edge resource provisioning (ERP) problem in CG necessitates a trade-off between cost and the inevitable queuing issues in CG systems. To address these issues, we propose ERPOL (ERP Online), a convenient and efficient approach for formulating ERP strategies for CGSPs without requiring any future information. The performance of ERPOL has been theoretically validated and experimentally evaluated. Experiments driven by real-world traces show that it achieves significant cost savings. The proposed approach has the potential to transform how CGSPs manage their infrastructure.
{"title":"An online cost optimization approach for edge resource provisioning in cloud gaming","authors":"Guoqing Tian, Li Pan, Shijun Liu","doi":"10.1016/j.jnca.2024.104008","DOIUrl":"10.1016/j.jnca.2024.104008","url":null,"abstract":"<div><p>Cloud gaming (CG), as an emergent computing paradigm, is revolutionizing the gaming industry. Currently, cloud gaming service providers (CGSPs) begin to integrate edge computing with cloud to provide services, with the aim of maximizing gaming service revenue while considering the costs incurred and the benefits generated. However, it is non-trivial to maximize gaming service revenue, as future requests are not known beforehand, and poor resource provisioning may result in exorbitant costs. In addition, the edge resource provisioning (ERP) problem in CG necessitates a trade-off between cost and inevitable queuing issues in CG systems. To address this issue, we propose ERPOL (ERP Online), a convenient and efficient approach to formulate ERP strategies for CGSPs, without requiring any future information. The performance of ERPOL has been theoretically validated and experimentally evaluated. Experiments driven by real-world traces show that it can achieve significant cost savings. The proposed approach has the potential to transform how CGSPs manage their infrastructure.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104008"},"PeriodicalIF":7.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142094756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1016/j.jnca.2024.104009
Xiaolei Yang, Zhixin Xia, Junhui Song, Yongshan Liu
To accelerate convergence and improve the accuracy of the federated shared model, this paper proposes a Federated Transfer Learning method based on frozen network parameters. For the comparative experiments on frozen network parameters, we freeze two, three, and four network layers across 8 sets of experimental tasks and two target users, use homomorphic-encryption-based Federated Transfer Learning to transfer parameters confidentially, and compare and analyze accuracy, convergence speed, and loss function values. The experiments show that the model with three frozen layers achieves the highest accuracy, with average values of 0.9165 and 0.9164 for the two target users. Its convergence is also the most favorable, completing after 25 iterations, and its training times for the two users are the shortest, at 1732.0 s and 1787.3 s, respectively. The lowest loss value is 0.181 for User-II and 0.2061 for User-III. Finally, unlabeled, non-empty enterprise credit data is predicted, with 61.08% of users classified as low-risk. By freezing source-domain network parameters in a shared network, this approach achieves rapid convergence of the target network model and saves computational resources.
{"title":"Credit risk prediction for small and micro enterprises based on federated transfer learning frozen network parameters","authors":"Xiaolei Yang, Zhixin Xia, Junhui Song, Yongshan Liu","doi":"10.1016/j.jnca.2024.104009","DOIUrl":"10.1016/j.jnca.2024.104009","url":null,"abstract":"<div><p>To accelerate the convergence speed and improve the accuracy of the federated shared model, this paper proposes a Federated Transfer Learning method based on frozen network parameters. The article sets up frozen two, three, and four layers network parameters, 8 sets of experimental tasks, and two target users for comparative experiments on frozen network parameters, and uses homomorphic encryption based Federated Transfer Learning to achieve secret transfer of parameters, and the accuracy, convergence speed, and loss function values of the experiment were compared and analyzed. The experiment proved that the frozen three-layer network parameter model has the highest accuracy, with the average values of the two target users being 0.9165 and 0.9164; The convergence speed is also the most ideal, with fast convergence completed after 25 iterations. The training time for the two users is also the shortest, with 1732.0s and 1787.3s, respectively; The loss function value shows that the lowest value for User-II is 0.181, while User-III is 0.2061. Finally, the unlabeled and non-empty enterprise credit data is predicted, with 61.08% of users being low-risk users. This article achieves rapid convergence of the target network model by freezing source domain network parameters in a shared network, saving computational resources.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104009"},"PeriodicalIF":7.7,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142129176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-29 | DOI: 10.1016/j.jnca.2024.104006
Quang Truong Vu, Phuc Tan Nguyen, Thi Hanh Nguyen, Thi Thanh Binh Huynh, Van Chien Trinh, Mikael Gidlund
In the Internet of Things (IoT) era, wireless sensor networks play a critical role in communication systems. One of the most crucial problems in wireless sensor networks is sensor deployment, which seeks a strategy for placing sensors within the surveillance area so that two fundamental criteria of wireless sensor networks, coverage and connectivity, are guaranteed. In this paper, we address the multi-objective deployment problem of maximizing area coverage while minimizing the number of nodes used. Since Harmony Search is a simple yet suitable algorithm for this task, we propose a Harmony Search algorithm with several enhancements, including heuristic initialization, random sampling of sensor types, weighted fitness evaluation, and the use of different components in the fitness function, to solve the sensor deployment problem in a heterogeneous wireless sensor network where sensors have different sensing ranges. In addition, a probabilistic sensing model is used to reflect how sensors behave in practice. We also extend our solution to 3D areas and propose a realistic 3D dataset to evaluate it. The simulation results show that the proposed algorithms solve the area coverage problem more efficiently than previous algorithms. In a large-scale evaluation, our best proposal improves the coverage ratio by 10.20% and reduces cost by 27.65% compared to the best baseline.
{"title":"Striking the perfect balance: Multi-objective optimization for minimizing deployment cost and maximizing coverage with Harmony Search","authors":"Quang Truong Vu , Phuc Tan Nguyen , Thi Hanh Nguyen , Thi Thanh Binh Huynh , Van Chien Trinh , Mikael Gidlund","doi":"10.1016/j.jnca.2024.104006","DOIUrl":"10.1016/j.jnca.2024.104006","url":null,"abstract":"<div><p>In the Internet of Things (IoT) era, wireless sensor networks play a critical role in communication systems. One of the most crucial problems in wireless sensor networks is the sensor deployment problem, which attempts to provide a strategy to place the sensors within the surveillance area so that two fundamental criteria of wireless sensor networks, coverage and connectivity, are guaranteed. In this paper, we look to solve the multi-objective deployment problem so that area coverage is maximized and the number of nodes used is minimized. Since Harmony Search is a simple yet suitable algorithm for our work, we propose Harmony Search algorithm along with various enhancement proposals, including heuristic initialization, random sampling of sensor types, weighted fitness evaluation, and using different components in the fitness function, to provide a solution to the problem of sensor deployment in a heterogeneous wireless sensor network where sensors have different sensing ranges. On top of that, the probabilistic sensing model is used to reflect how the sensors work realistically. We also provide the extension of our solution to 3D areas and propose a realistic 3D dataset to evaluate it. The simulation results show that the proposed algorithms solve the area coverage problem more efficiently than previous algorithms. Our best proposal demonstrates significant improvements in coverage ratio by 10.20% and cost saving by 27.65% compared to the best baseline in a large-scale evaluation.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104006"},"PeriodicalIF":7.7,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142137011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-23 | DOI: 10.1016/j.jnca.2024.104004
Arash Mahboubi, Khanh Luong, Hamed Aboutorab, Hang Thanh Bui, Geoff Jarrad, Mohammed Bahutair, Seyit Camtepe, Ganna Pogrebna, Ejaz Ahmed, Bazara Barry, Hannah Gately
In the rapidly changing cybersecurity landscape, threat hunting has become a critical proactive defense against sophisticated cyber threats. While traditional security measures are essential, their reactive nature often falls short in countering malicious actors’ increasingly advanced tactics. This paper explores the crucial role of threat hunting, a systematic, analyst-driven process aimed at uncovering hidden threats lurking within an organization’s digital infrastructure before they escalate into major incidents. Despite its importance, the cybersecurity community grapples with several challenges, including the lack of standardized methodologies, the need for specialized expertise, and the integration of cutting-edge technologies like artificial intelligence (AI) for predictive threat identification. To tackle these challenges, this survey paper offers a comprehensive overview of current threat hunting practices, emphasizing the integration of AI-driven models for proactive threat prediction. Our research explores critical questions regarding the effectiveness of various threat hunting processes and the incorporation of advanced techniques such as augmented methodologies and machine learning. Our approach involves a systematic review of existing practices, including frameworks from industry leaders like IBM and CrowdStrike. We also explore resources for intelligence ontologies and automation tools. The background section clarifies the distinction between threat hunting and anomaly detection, emphasizing systematic processes crucial for effective threat hunting. We formulate hypotheses based on hidden states and observations, examine the interplay between anomaly detection and threat hunting, and introduce iterative detection methodologies and playbooks for enhanced threat detection. Our review encompasses supervised and unsupervised machine learning approaches, reasoning techniques, graph-based and rule-based methods, as well as other innovative strategies. We identify key challenges in the field, including the scarcity of labeled data, imbalanced datasets, the need for integrating multiple data sources, the rapid evolution of adversarial techniques, and the limited availability of human expertise and data intelligence. The discussion highlights the transformative impact of artificial intelligence on both threat hunting and cybercrime, reinforcing the importance of robust hypothesis development. This paper contributes a detailed analysis of the current state and future directions of threat hunting, offering actionable insights for researchers and practitioners to enhance threat detection and mitigation strategies in the ever-evolving cybersecurity landscape.
{"title":"Evolving techniques in cyber threat hunting: A systematic review","authors":"Arash Mahboubi , Khanh Luong , Hamed Aboutorab , Hang Thanh Bui , Geoff Jarrad , Mohammed Bahutair , Seyit Camtepe , Ganna Pogrebna , Ejaz Ahmed , Bazara Barry , Hannah Gately","doi":"10.1016/j.jnca.2024.104004","DOIUrl":"10.1016/j.jnca.2024.104004","url":null,"abstract":"<div><p>In the rapidly changing cybersecurity landscape, threat hunting has become a critical proactive defense against sophisticated cyber threats. While traditional security measures are essential, their reactive nature often falls short in countering malicious actors’ increasingly advanced tactics. This paper explores the crucial role of threat hunting, a systematic, analyst-driven process aimed at uncovering hidden threats lurking within an organization’s digital infrastructure before they escalate into major incidents. Despite its importance, the cybersecurity community grapples with several challenges, including the lack of standardized methodologies, the need for specialized expertise, and the integration of cutting-edge technologies like artificial intelligence (AI) for predictive threat identification. To tackle these challenges, this survey paper offers a comprehensive overview of current threat hunting practices, emphasizing the integration of AI-driven models for proactive threat prediction. Our research explores critical questions regarding the effectiveness of various threat hunting processes and the incorporation of advanced techniques such as augmented methodologies and machine learning. Our approach involves a systematic review of existing practices, including frameworks from industry leaders like IBM and CrowdStrike. We also explore resources for intelligence ontologies and automation tools. The background section clarifies the distinction between threat hunting and anomaly detection, emphasizing systematic processes crucial for effective threat hunting. We formulate hypotheses based on hidden states and observations, examine the interplay between anomaly detection and threat hunting, and introduce iterative detection methodologies and playbooks for enhanced threat detection. Our review encompasses supervised and unsupervised machine learning approaches, reasoning techniques, graph-based and rule-based methods, as well as other innovative strategies. We identify key challenges in the field, including the scarcity of labeled data, imbalanced datasets, the need for integrating multiple data sources, the rapid evolution of adversarial techniques, and the limited availability of human expertise and data intelligence. The discussion highlights the transformative impact of artificial intelligence on both threat hunting and cybercrime, reinforcing the importance of robust hypothesis development. 
This paper contributes a detailed analysis of the current state and future directions of threat hunting, offering actionable insights for researchers and practitioners to enhance threat detection and mitigation strategies in the ever-evolving cybersecurity landscape.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104004"},"PeriodicalIF":7.7,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1084804524001814/pdfft?md5=7fb543744ca72ceac22267ab8ec36898&pid=1-s2.0-S1084804524001814-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1016/j.jnca.2024.104005
Baoshan Lu, Junli Fang, Junxiu Liu, Xuemin Hong
In this paper, we address the challenge of minimizing system energy consumption for task offloading within non-line-of-sight (NLoS) mobile edge computing (MEC) environments. Our approach integrates an active reconfigurable intelligent surface (RIS) and employs a hybrid transmission scheme combining time division multiple access (TDMA) and non-orthogonal multiple access (NOMA) to enhance the quality of service (QoS) for user task offloading. The formulation of this problem as a non-convex optimization issue presents significant challenges due to its inherent complexity. To overcome this, we introduce an innovative method termed element refinement-based differential evolution (ERBDE). Initially, through rigorous theoretical analysis, we optimally determine the allocation of local computation resources, computation resources at the base station (BS), and transmit power of users, while maintaining fixed values for the offloading ratio, amplification factor, phase of the reflecting element, and the transmission period. Subsequently, we employ the differential evolution (DE) algorithm to iteratively fine-tune the offloading ratio, amplification factor, phase of the reflecting element, and transmission period towards near-optimal configurations. Our simulation results demonstrate that the implementation of active RIS-supported task offloading utilizing the hybrid TDMA-NOMA scheme results in an average system energy consumption reduction of 80.3%.
{"title":"Energy efficient multi-user task offloading through active RIS with hybrid TDMA-NOMA transmission","authors":"Baoshan Lu , Junli Fang , Junxiu Liu , Xuemin Hong","doi":"10.1016/j.jnca.2024.104005","DOIUrl":"10.1016/j.jnca.2024.104005","url":null,"abstract":"<div><p>In this paper, we address the challenge of minimizing system energy consumption for task offloading within non-line-of-sight (NLoS) mobile edge computing (MEC) environments. Our approach integrates an active reconfigurable intelligent surface (RIS) and employs a hybrid transmission scheme combining time division multiple access (TDMA) and non-orthogonal multiple access (NOMA) to enhance the quality of service (QoS) for user task offloading. The formulation of this problem as a non-convex optimization issue presents significant challenges due to its inherent complexity. To overcome this, we introduce an innovative method termed element refinement-based differential evolution (ERBDE). Initially, through rigorous theoretical analysis, we optimally determine the allocation of local computation resources, computation resources at the base station (BS), and transmit power of users, while maintaining fixed values for the offloading ratio, amplification factor, phase of the reflecting element, and the transmission period. Subsequently, we employ the differential evolution (DE) algorithm to iteratively fine-tune the offloading ratio, amplification factor, phase of the reflecting element, and transmission period towards near-optimal configurations. Our simulation results demonstrate that the implementation of active RIS-supported task offloading utilizing the hybrid TDMA-NOMA scheme results in an average system energy consumption reduction of 80.3%.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104005"},"PeriodicalIF":7.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-22 | DOI: 10.1016/j.jnca.2024.104001
Mengjie Lv, Xuanli Liu, Hui Dong, Weibei Fan
With the rapid growth of data volume, the escalating complexity of data businesses, and the increasing reliance on the Internet in daily life and production, the scale of data centers is constantly expanding. The data center network (DCN) is the bridge connecting large-scale servers in data centers for large-scale distributed computing. Building a DCN structure that is flexible and cost-effective while keeping its topological properties unchanged during network expansion has become a challenging issue. In this paper, we propose an expandable and cost-effective DCN, namely HHCube, which is based on the half hypercube structure. Further, we analyze several characteristics of HHCube, including its connectivity, diameter, and bisection bandwidth. We also design an efficient algorithm to find the shortest path between any two distinct nodes and present a fault-tolerant routing scheme to obtain a fault-tolerant path between any two distinct fault-free nodes in HHCube. Meanwhile, we present two local diagnosis algorithms to determine the status of nodes in HHCube under the PMC model and the MM* model, respectively. Our results demonstrate that even with up to 25% faulty nodes in HHCube, both algorithms achieve a correct diagnosis rate exceeding 90%. Finally, we compare HHCube with state-of-the-art DCNs including Fat-Tree, DCell, BCube, Ficonn, and HSDC; the experimental results indicate that HHCube is an excellent candidate for constructing expandable and cost-effective DCNs.
{"title":"An expandable and cost-effective data center network","authors":"Mengjie Lv, Xuanli Liu, Hui Dong, Weibei Fan","doi":"10.1016/j.jnca.2024.104001","DOIUrl":"10.1016/j.jnca.2024.104001","url":null,"abstract":"<div><p>With the rapid growth of data volume, the escalating complexity of data businesses, and the increasing reliance on the Internet for daily life and production, the scale of data centers is constantly expanding. The data center network (DCN) is a bridge connecting large-scale servers in data centers for large-scale distributed computing. How to build a DCN structure that is flexible and cost-effective, while maintaining its topological properties unchanged during network expansion has become a challenging issue. In this paper, we propose an expandable and cost-effective DCN, namely HHCube, which is based on the half hypercube structure. Further, we analyze some characteristics of HHCube, including connectivity, diameter, and bisection bandwidth of the HHCube. We also design an efficient algorithm to find the shortest path between any two distinct nodes and present a fault-tolerant routing scheme to obtain a fault-tolerant path between any two distinct fault-free nodes in HHCube. Meanwhile, we present two local diagnosis algorithms to determine the status of nodes in HHCube under the PMC model and MM* model, respectively. Our results demonstrate that despite the presence of up to 25% faulty nodes in HHCube, both algorithms achieve a correct diagnosis rate exceeding 90%. Finally, we compare HHCube with state-of-the-art DCNs including Fat-Tree, DCell, BCube, Ficonn, and HSDC, and the experimental results indicate that the HHCube is an excellent candidate for constructing expandable and cost-effective DCNs.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104001"},"PeriodicalIF":7.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142089137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-20 | DOI: 10.1016/j.jnca.2024.104003
Ji Wan, Kai Hu, Jie Li, Yichen Guo, Hao Su, Shenzhang Li, Yafei Ye
The consensus algorithm is the core of a permissioned blockchain; it directly affects the performance and scalability of the system. Performance is limited by the computing power and network bandwidth of a single leader node. Most existing blockchain systems adopt a mesh or star topology, and blockchain performance decreases rapidly as the number of nodes increases. To solve this problem, we first design the n-k cluster tree and a corresponding generation algorithm, which supports rapid reconfiguration of nodes. We then propose the Zebra consensus algorithm, a cluster-tree-based consensus algorithm. Compared to PBFT, it has higher throughput and supports more nodes under the same hardware conditions. However, while the tree network topology enhances scalability, it also increases latency among nodes. To reduce transaction latency, we design the Pipeline-Zebra consensus algorithm, which further improves the performance of blockchain systems in a tree network topology through parallel message propagation and block validation. The message complexity of the algorithm is O(n). Experimental results show that the proposed algorithm can reach 2.25 times the performance of the PBFT algorithm and supports four times the number of nodes on the same hardware.
{"title":"Zebra: A cluster-aware blockchain consensus algorithm","authors":"Ji Wan , Kai Hu , Jie Li , Yichen Guo , Hao Su , Shenzhang Li , Yafei Ye","doi":"10.1016/j.jnca.2024.104003","DOIUrl":"10.1016/j.jnca.2024.104003","url":null,"abstract":"<div><p>The Consensus algorithm is the core of the permissioned blockchain, it directly affects the performance and scalability of the system. Performance is limited by the computing power and network bandwidth of a single leader node. Most existing blockchain systems adopt mesh or star topology. Blockchain performance decreases rapidly as the number of nodes increases. To solve this problem, we first design the <em>n-k</em> cluster tree and corresponding generation algorithm, which supports rapid reconfiguration of nodes. Then we propose the <em>Zebra</em> consensus algorithm, which is a cluster tree-based consensus algorithm. Compared to the PBFT, it has higher throughput and supports more nodes under the same hardware conditions. However, the tree network topology enhances scalability while also increasing latency among nodes. To reduce transaction latency, we designed the <em>Pipeline-Zebra</em> consensus algorithm that further improves the performance of blockchain systems in a tree network topology through parallel message propagation and block validation. The message complexity of the algorithm is <em>O(n)</em>. Experimental results show that the performance of the algorithm proposed in this paper can reach 2.25 times that of the PBFT algorithm, and it supports four times the number of nodes under the same hardware.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104003"},"PeriodicalIF":7.7,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-18 | DOI: 10.1016/j.jnca.2024.104002
Onur Sahin, Vanlin Sathya
This study introduces a groundbreaking method for predicting network quality in LTE and 5G environments using only GPS data, focusing on pinpointing specific locations within a designated area to determine network quality as either good or poor. By leveraging machine learning algorithms, we have successfully demonstrated that geographical location can be a key indicator of network performance. Our research involved initially classifying network quality using traditional signal strength metrics and then shifting to rely exclusively on GPS coordinates for prediction. Employing a variety of classifiers, including Decision Tree, Random Forest, Gradient Boosting and K-Nearest Neighbors, we uncovered notable correlations between location data and network quality. This methodology provides network operators with a cost-effective and efficient tool for identifying and addressing network quality issues based on geographic insights. Additionally, we explored the potential implications of our study in various use cases, including healthcare, education, and urban industrialization, highlighting its versatility across different sectors. Our findings pave the way for innovative network management strategies, especially critical in the contexts of both LTE and the rapidly evolving landscape of 5G technology.
{"title":"Network quality prediction in a designated area using GPS data","authors":"Onur Sahin , Vanlin Sathya","doi":"10.1016/j.jnca.2024.104002","DOIUrl":"10.1016/j.jnca.2024.104002","url":null,"abstract":"<div><p>This study introduces a groundbreaking method for predicting network quality in LTE and 5G environments using only GPS data, focusing on pinpointing specific locations within a designated area to determine network quality as either good or poor. By leveraging machine learning algorithms, we have successfully demonstrated that geographical location can be a key indicator of network performance. Our research involved initially classifying network quality using traditional signal strength metrics and then shifting to rely exclusively on GPS coordinates for prediction. Employing a variety of classifiers, including Decision Tree, Random Forest, Gradient Boosting and K-Nearest Neighbors, we uncovered notable correlations between location data and network quality. This methodology provides network operators with a cost-effective and efficient tool for identifying and addressing network quality issues based on geographic insights. Additionally, we explored the potential implications of our study in various use cases, including healthcare, education, and urban industrialization, highlighting its versatility across different sectors. Our findings pave the way for innovative network management strategies, especially critical in the contexts of both LTE and the rapidly evolving landscape of 5G technology.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"231 ","pages":"Article 104002"},"PeriodicalIF":7.7,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142012560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-17 | DOI: 10.1016/j.jnca.2024.104000
Samane Sharif, Mohammad Hossein Yaghmaee Moghaddam, Seyed Amin Hosseini Seno
The proliferation of IoT devices has led to a surge in network traffic, resulting in higher energy usage and response delays. In-network caching has emerged as a viable solution to this issue. However, caching IoT data faces two key challenges: the transient nature of IoT content and the unknown spatiotemporal content popularity. Additionally, maintaining a global view of dynamic IoT networks is problematic due to the high communication overhead involved. To tackle these challenges, this paper presents an adaptive management approach that jointly optimizes caching and communication in IoT networks using a novel bi-level control method called BC3. The approach employs two types of controllers: a global ILP-based optimal controller for long-term timeslots and local learning-based controllers for short-term timeslots. The long-term controller periodically establishes a global cache policy for the network and sends specific cache rules to each edge server. The local controller at each edge server solves the joint problem of bandwidth allocation and cache adaptation using a deep reinforcement learning (DRL) technique. The main objective is to minimize energy consumption and system response time by utilizing both global and local observations. Experimental results demonstrate that the proposed approach increases the cache hit rate by approximately 12% and uses 11% less energy than the other methods. Increasing the cache hit rate leads to a reduction of about 17% in response time for user requests. Our bi-level control approach offers a promising solution for leveraging the network's global visibility while balancing communication overhead (as energy consumption) against system performance. Additionally, the proposed method has the lowest cache eviction, around 19% lower than the lowest eviction among the other comparison methods; the eviction metric evaluates the effectiveness of an adaptive caching approach designed for transient data.
{"title":"A hybrid Bi-level management framework for caching and communication in Edge-AI enabled IoT","authors":"Samane Sharif, Mohammad Hossein Yaghmaee Moghaddam, Seyed Amin Hosseini Seno","doi":"10.1016/j.jnca.2024.104000","DOIUrl":"10.1016/j.jnca.2024.104000","url":null,"abstract":"<div><p>The proliferation of IoT devices has led to a surge in network traffic, resulting in higher energy usage and response delays. In-network caching has emerged as a viable solution to address this issue. However, caching IoT data faces two key challenges: the transient nature of IoT content and the unknown spatiotemporal content popularity. Additionally, the use of a global view on dynamic IoT networks is problematic due to the high communication overhead involved. To tackle these challenges, this paper presents an adaptive management approach that jointly optimizes caching and communication in IoT networks using a novel bi-level control method called BC3. The approach employs two types of controllers: a global ILP-based optimal controller for long-term timeslots and local learning-based controllers for short-term timeslots. The long-term controller periodically establishes a global cache policy for the network and sends specific cache rules to each edge server. The local controller at each edge server solves the joint problem of bandwidth allocation and cache adaptation using deep reinforcement learning (DRL) technique. The main objective is to minimize energy consumption and system response time with utilizing the global and local observations. Experimental results demonstrate that the proposed approach increases cache hit rate by approximately 12% and uses 11% less energy compared to the other methods. Increasing the cache hit rate can lead to a reduction in about 17% in response time for user requests. Our bi-level control approach offers a promising solution for leveraging the network's global visibility while balancing communication overhead (as energy consumption) against system performance. Additionally, the proposed method has the lowest cache eviction, around 19% lower than the lowest eviction of the other comparison methods. The eviction metric is a metric to evaluate the effectiveness of adaptive caching approach designed for transient data.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"232 ","pages":"Article 104000"},"PeriodicalIF":7.7,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-14 | DOI: 10.1016/j.jnca.2024.103998
Lingling Zi, Xin Cong
Current blockchain systems place high demands on network connectivity and data transmission rate: for example, nodes must receive the latest blocks in time to update the blockchain, and they must immediately broadcast newly generated blocks to other nodes for consensus. This restricts blockchains to running only on networks with real-time connections, so the existence of delay tolerant networks poses a great challenge to the deployment of blockchain systems. To address this challenge, a novel blockchain transaction mechanism is proposed. First, the block structure is modified by adding a flag, and on this basis the extrachain is defined. Second, based on the blockchain transaction process, transaction verification and consensus algorithms on the extrachain are presented. Third, both an extrachain selection algorithm and an appending algorithm are proposed, so that the extrachain can be appended to the blockchain fairly and safely. Finally, an extrachain transmission scheme is presented to broadcast blocks generated in the delayed network to the normal network. Theoretical analysis and simulation experiments further illustrate the efficiency of the proposed mechanism.
{"title":"A blockchain transaction mechanism in the delay tolerant network","authors":"Lingling Zi, Xin Cong","doi":"10.1016/j.jnca.2024.103998","DOIUrl":"10.1016/j.jnca.2024.103998","url":null,"abstract":"<div><p>Current blockchain systems have high requirements on network connection and data transmission rate, for example, nodes have to receive the latest blocks in time to update the blockchain, nodes have to immediately broadcast the generated block to other nodes for consensus, which restricts the blockchain to run only on real-time connection networks, but the existence of delay tolerant networks poses a great challenge to the deployment of blockchain systems. To address this challenge, a novel blockchain transaction mechanism is proposed. First, the block structure is modified by adding a flag, and on this basis, the definition of the extrachain is proposed. Secondly, based on the blockchain transaction process, transaction verification and consensus algorithms on the extrachain are presented. Thirdly, both the extrachain selection algorithm and appending algorithm are proposed, so that the extrachain can be appended to the blockchain fairly and safely. Finally, an extrachain transmission scheme is presented to broadcast the blocks generated in the delayed network to the normal network. Theoretical analysis and simulation experiments further illustrate the efficiency of the proposed mechanism.</p></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"231 ","pages":"Article 103998"},"PeriodicalIF":7.7,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141981088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}