Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111001
Shuhui Yang, Jian Chen, Jie Jia, Liang Guo, Xingwei Wang
This paper investigates a novel non-orthogonal multiple access (NOMA) assisted ultra-reliable low-latency communications (URLLC) paradigm. By allowing multiple users to occupy the same time or frequency resource blocks with the NOMA technique, transmission delays and network congestion in multi-user URLLC systems are mitigated. Furthermore, a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) is employed to provide full-space coverage by proactively reconfiguring the electromagnetic propagation environment. The energy consumption optimization problem is formulated by refining the blocklength assignment and jointly optimizing the power allocation and the STAR-RIS configuration. Due to the highly coupled variables and non-convex constraints, the optimization problem is proven to be NP-hard. To tackle this intractable problem, we propose an alternating optimization (AO)-based solution combining adaptive genetic algorithm (AGA) and successive convex approximation (SCA) methods. Numerical results demonstrate that: (i) the proposed refined blocklength assignment scheme outperforms fixed blocklength assignment; (ii) the proposed STAR-RIS-assisted NOMA system is superior to the conventional RIS-assisted NOMA system.
Title: "Joint resource allocation and blocklength assignment in STAR-RIS and NOMA-assisted URLLC systems" (Computer Networks, vol. 257, Article 111001)
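The blocklength-reliability trade-off that motivates the refined blocklength assignment is commonly captured by the finite-blocklength normal approximation for the AWGN channel. The sketch below is a generic illustration of that approximation, not taken from the paper; `fbl_rate` and `q_inv` are illustrative names.

```python
import math
from statistics import NormalDist

def q_inv(eps):
    """Inverse Gaussian Q-function: Q^{-1}(eps) = Phi^{-1}(1 - eps)."""
    return NormalDist().inv_cdf(1.0 - eps)

def fbl_rate(snr, n, eps):
    """Normal-approximation achievable rate (bits per channel use) at
    blocklength n and error probability eps, AWGN channel, linear SNR."""
    c = math.log2(1.0 + snr)                  # Shannon capacity term
    v = 1.0 - 1.0 / (1.0 + snr) ** 2          # channel dispersion
    return c - math.sqrt(v / n) * q_inv(eps) / math.log(2.0)
```

As the blocklength grows, the rate approaches capacity; shrinking the blocklength (for lower latency) costs rate, which is the tension the joint optimization balances.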
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110995
Pankaj Chaudhary, Neminath Hubballi
The performance of the caching system in Named Data Networks (NDN) is influenced by the way requests are routed and contents are cached. Due to the limited capacity of the caches available at routers, different caching techniques have emerged to use the available space effectively. Cooperative content searching and caching improves performance, but comes with additional communication overhead. In this paper, we present PeNCache, a popularity-based, lightweight neighborhood cooperative content searching and caching system for the NDN architecture that improves overall performance. PeNCache reactively explores all neighborhood routers with the aim of retrieving content from nearby routers while routing requests towards the content source. To aid its caching decisions, PeNCache takes both local and global popularity into account. Global popularity is estimated by a set of designated nodes in the network that periodically exchange local popularity information. We present details of how Interest packets and Data packets are processed at each router and how popularity estimation is done in PeNCache. We perform a simulation study to evaluate the performance of PeNCache on realistic network topologies using a discrete event simulator. Outcomes of the simulation demonstrate that it outperforms state-of-the-art caching schemes in terms of cache hit ratio, content access time, average hit distance, and cache diversity.
Title: "PeNCache: Popularity based cooperative caching in Named Data Networks" (Computer Networks, vol. 257, Article 110995)
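A minimal sketch of how a popularity-weighted cache admission and eviction decision could look. The weighting of local vs. global popularity and all names here are hypothetical stand-ins, not PeNCache's actual policy.

```python
def cache_decision(cache, capacity, name, local_pop, global_pop, w_local=0.6):
    """Admit `name` into `cache` (dict: name -> score) if there is room, or
    if its combined popularity beats the least popular cached item (which is
    then evicted). Returns True if the content ends up cached."""
    score = w_local * local_pop + (1.0 - w_local) * global_pop
    if name in cache:
        cache[name] = max(cache[name], score)  # refresh on re-request
        return True
    if len(cache) < capacity:
        cache[name] = score
        return True
    victim = min(cache, key=cache.get)         # least popular cached item
    if score > cache[victim]:
        del cache[victim]
        cache[name] = score
        return True
    return False
```

The same score could be recomputed as designated nodes push updated global popularity, so admission decisions track both neighborhood and network-wide demand.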
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111021
Kaifeng Hua, Shengchao Su, Yiwang Wang
The restricted coverage of edge servers in the Internet of Vehicles (IoV) results in service migration as vehicles traverse various regions, potentially escalating operational costs and diminishing service quality. However, existing service migration schemes inadequately address the dynamic attributes of high-speed mobile vehicles and the temporal variability of the network. To overcome this issue, we propose a mobility-aware deep reinforcement learning framework based on vehicle behavior prediction for service migration. Firstly, taking the service processing latency, migration latency, and energy consumption as metrics, a constrained model is established to minimize long-term costs. Given the considerable uncertainty in the associational behaviors between high-speed mobile vehicles and edge servers, a vehicle behavior prediction method utilizing the Hidden Markov Model (HMM) is then proposed. On this basis, we design a mobility-aware reinforcement learning service migration algorithm based on a Double Dueling Deep Q-Network (D3RLSM) incorporating a prioritized experience replay mechanism to extract vehicular state features accurately and optimize the training process. Compared with several baseline methods, D3RLSM shows its effectiveness in reducing service latency and energy consumption.
Title: "Intelligent service migration for the internet of vehicles in edge computing: A mobility-aware deep reinforcement learning framework" (Computer Networks, vol. 257, Article 111021)
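Predicting a vehicle's next edge-server association with an HMM can be sketched as the standard forward algorithm followed by a one-step push through the transition matrix. This is a generic sketch assuming discrete observations, not the paper's exact formulation.

```python
def hmm_predict_next(pi, A, B, obs):
    """One-step prediction of the hidden state (e.g. which edge server a
    vehicle will associate with next) from an observation sequence.
    pi: initial distribution, A[i][j]: transition probabilities,
    B[i][o]: emission probabilities, obs: list of observation indices."""
    n = len(pi)
    # Forward algorithm: unnormalized filtering distribution.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [B[j][o] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    z = sum(alpha) or 1.0
    belief = [a / z for a in alpha]            # P(state now | observations)
    # One-step prediction through the transition matrix.
    next_p = [sum(belief[i] * A[i][j] for i in range(n)) for j in range(n)]
    return max(range(n), key=lambda j: next_p[j])
```

The predicted association can then feed the migration policy, so services are moved before the handover rather than after it.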
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111012
Ahmed Al-hamadani, Gábor Lencse
The Benchmarking Working Group of the IETF has published a comprehensive methodology in RFC 8219 for benchmarking IPv6 transition technologies. Mapping of Address and Port using Translation (MAP-T) is one of the most prominent of these technologies; it is a stateless IPv4-as-a-Service (IPv4aaS) technology that belongs to the double translation category of RFC 8219. This paper presents the design and implementation of Maptperf, the world's first MAP-T benchmarking tool that complies with the guidelines of RFC 8219, used to test the performance of the technology's Border Relay (BR) router, which is the focal point of its scalability. Several design considerations, operational requirements, and configuration settings are discussed. A detailed description of the implementation is then presented, along with various important design decisions made during implementation. Finally, the research findings for two kinds of tests, the performance estimation and the functional tests, are presented. The performance estimation test shows how fast and robust Maptperf is via an initial assessment of its performance, while the functional tests cover four types of measurements for MAP-T implementations: throughput, frame loss rate (FLR), latency, and packet delay variation (PDV). For the latter, the authors chose a popular MAP-T BR implementation, Jool, whose function is also validated via a testbed installed for this purpose.
Title: "Maptperf: An RFC 8219 compliant tester for benchmarking MAP-T border relay routers" (Computer Networks, vol. 257, Article 111012)
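RFC 8219 inherits its throughput measurement from the RFC 2544 style binary search: find the highest frame rate at which a fixed-duration trial loses no frames. A simplified sketch of that search loop, with `trial` standing in for one tester run against the device under test:

```python
def throughput_search(trial, lo, hi, precision=1000):
    """Binary search for the throughput rate: the highest frame rate (fps)
    at which `trial(rate)` reports zero frame loss. `trial` runs one
    fixed-duration trial and returns the number of frames lost."""
    best = 0
    while hi - lo > precision:
        rate = (lo + hi) // 2
        if trial(rate) == 0:       # no loss: search higher
            best, lo = rate, rate
        else:                      # loss: search lower
            hi = rate
    return best
```

The real tester repeats such trials for the different frame sizes and traffic directions RFC 8219 requires; the loop above only shows the rate-search skeleton.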
Non-terrestrial networks (NTN) are becoming an attractive approach in the beyond 5G/6G era to enable ubiquitous connectivity, particularly in areas that are currently uncovered or underserved. NTN provides extensive coverage from the sky by utilizing satellites and unmanned aerial vehicles (UAVs) as mobile network nodes, such as base stations and routers. However, the mobility of these nodes in NTN leads to dynamic changes in network topology, which in turn reduces the opportunities and duration of NTN-ground communication. Additionally, variations in the communication environment, such as weather conditions, cause fluctuations in link quality and availability. Consequently, NTN faces challenges in maintaining a high packet delivery rate due to its dynamic topology and communication environment. This paper proposes a path selection method that uses link information-based path selection rule prediction in NTN. The proposed method selects paths based on rules predicted by a link information-based rule prediction model using machine learning (ML). The rule prediction model is trained on a dataset obtained through simulations of various NTN training scenarios. Simulation results over four evaluation scenarios show that the proposed method outperforms the existing methods in terms of packet delivery rate and its stability, even under severe weather conditions. The results further indicate that each path selection rule contributes to packet delivery, with the selective use of multiple path selection rules enabling the proposed method to adapt to various situations.
Title: "A path selection method based on rule prediction in non-terrestrial networks" (Computer Networks, vol. 257, Article 110958)
Tomohiro Korikawa, Chikako Takasaki, Kyota Hattori
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.110958
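The core idea, selecting a path by first predicting which selection rule suits the current link conditions and then applying that rule, can be sketched as follows. The three rules and the feature-based predictor are hypothetical stand-ins for the trained ML model.

```python
def select_path(paths, features, predict_rule):
    """Pick a path by first predicting a selection rule from link features,
    then applying it. `paths` is a list of dicts with 'hops', 'delay', and
    'min_quality'; `predict_rule` stands in for the trained rule-prediction
    model and returns one of the rule names below."""
    rules = {
        "min_hop":     lambda p: min(p, key=lambda x: x["hops"]),
        "min_delay":   lambda p: min(p, key=lambda x: x["delay"]),
        "max_quality": lambda p: max(p, key=lambda x: x["min_quality"]),
    }
    return rules[predict_rule(features)](paths)
```

Switching rules rather than predicting paths directly keeps each individual decision interpretable: under degraded links the quality-maximizing rule fires, and under clear conditions a delay- or hop-minimizing rule can take over.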
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2025.111040
Giancarlo Sciddurlo, Pietro Camarda, Domenico Striccoli, Ilaria Cianci, Giuseppe Piro, Gennaro Boggia
The Internet of Everything has emerged as a prominent paradigm, enabling the development of advanced services by integrating smart objects, individuals, processes, and data. In the context of social networking within this framework, addressing the inherent uncertainty of the environment and developing secure service provisioning mechanisms is crucial. At present, there has been limited exploration into the stochastic behavior of the service fulfillment process, especially when considering the trustworthiness and resource availability of service providers. Additionally, existing approaches supporting service provisioning often require continuous and computationally prohibitive efforts. To overcome these challenges, this paper introduces a Markov chain-based stochastic model that effectively predicts the steady-state behavior of service providers within an IoE network. The proposed model integrates both the trust levels and resource capabilities of providers to ensure successful service delivery, while simultaneously identifying and excluding malicious entities without imposing significant computational overhead. The validity of the model is demonstrated by comparing various performance metrics against results obtained from extensive simulations, highlighting its effectiveness and practical applicability. Ultimately, the model serves as a valuable tool for fostering trusted service provisioning, optimizing the design of service communities within social networks, preventing data traffic loss, and enhancing the overall reliability and responsiveness of the system.
Title: "Markov chain-based analytical model supporting service provisioning and network design in the Social Internet of Everything" (Computer Networks, vol. 258, Article 111040)
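The steady-state behavior such a model predicts is the stationary distribution of the underlying chain, which can be computed by power iteration. A generic sketch on an arbitrary row-stochastic matrix, not the paper's specific chain:

```python
def steady_state(P, iters=10_000, tol=1e-12):
    """Stationary distribution of a row-stochastic transition matrix P,
    by power iteration: pi_{t+1} = pi_t P until convergence."""
    n = len(P)
    pi = [1.0 / n] * n                         # start from the uniform distribution
    for _ in range(iters):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi
```

In a provider-trust model, the entries of the stationary vector would correspond to the long-run fraction of time a provider spends in each trust/availability state, which is exactly what steady-state performance metrics are read from.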
Malicious URL classification represents a crucial aspect of cybersecurity. Although existing work comprises numerous machine learning and deep learning-based URL classification models, most suffer from generalisation and domain-adaptation issues arising from the lack of representative training datasets. Furthermore, these models fail to provide explanations for a given URL classification in natural human language. In this work, we investigate and demonstrate the use of Large Language Models (LLMs) to address this issue. Specifically, we propose an LLM-based one-shot learning framework to predict whether a given URL is benign or phishing. Inspired by work done in the area of Chain-of-Thought reasoning, our framework draws on LLMs' reasoning capabilities to produce more accurate predictions. We evaluate our framework using three URL datasets and five state-of-the-art LLMs, and show that one-shot LLM prompting indeed provides performance close to that of supervised models, with GPT-4 Turbo being the best model, returning an average F1 score of 0.92 in the one-shot setting. We conduct a quantitative analysis of the LLM explanations and show that most of the explanations provided by LLMs align with the post-hoc explanations of the supervised classifiers, and that the explanations have high readability, coherency, and informativeness.
Title: "LLMs are one-shot URL classifiers and explainers" (Computer Networks, vol. 258, Article 111004)
Fariza Rashid, Nishavi Ranaweera, Ben Doyle, Suranga Seneviratne
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111004
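A one-shot prompt of the kind described, one labeled example with its reasoning followed by the query URL, might be assembled like this. The wording is illustrative, not the paper's exact prompt.

```python
def one_shot_prompt(url, example_url, example_label, example_reason):
    """Build a one-shot, chain-of-thought style prompt for URL
    classification. The model is expected to continue the final
    'Reasoning:' line and then emit a label."""
    return (
        "Classify the URL as 'benign' or 'phishing'. "
        "Reason step by step, then give a final label.\n\n"
        f"URL: {example_url}\n"
        f"Reasoning: {example_reason}\n"
        f"Label: {example_label}\n\n"
        f"URL: {url}\n"
        "Reasoning:"
    )
```

The single worked example both anchors the output format and demonstrates the kind of evidence (domain reputation, deceptive tokens, protocol) the model should cite in its explanation.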
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111003
Yunqi Sun, Hesheng Sun, Tuo Cao, Mingtao Ji, Zhuzhong Qian, Lingkun Meng, Dongxu Wang, Xiangyu Li
Virtual Reality Serious Games (VR-SGs) integrate immersive virtual reality (VR) technology with instruction-oriented serious games (SGs), aiming to improve the efficiency of educational and training programs. VR-SG training effectiveness is highly contingent upon the system's continuous consistency level. Strong continuous consistency ensures the same VR-SG world across different players, enabling them to make better decisions based on the individual game world's context. Although edge computing enables a low-delay VR system for geographically dispersed players, the delay differences among players highlight the need for strong continuous consistency. Specifically, the differences in temporal and spatial dimensions among different end-players result in significant variations in their perceived end-to-end delay, further exhibiting different game worlds. We first propose a long-term task redistribution problem to enhance the continuous consistency for edge-assisted VR-SGs while controlling the consistency loss and player-perceived delay. To solve the above time-coupled problem, we design an online polynomial-time algorithm called the Online Continuous Consistency Enhancement (OCCE) algorithm. OCCE can effectively obtain the task redistribution scheme with integrated randomized rounding and the Constraints-Firefighter Algorithm. We prove that the continuous consistency optimality of OCCE can approximate the optimal offline solution. Finally, extensive evaluations based on real-world datasets and preparatory measurements show that, at a player scale of 30, OCCE improves continuous consistency by at least 2.38x compared to alternatives in the average case.
Title: "Towards strong continuous consistency in edge-assisted VR-SGs: Delay-differences sensitive online task redistribution" (Computer Networks, vol. 258, Article 111003)
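The randomized-rounding step used inside OCCE can be illustrated generically: each task picks a server with probability equal to its fractional assignment from the relaxed problem. A sketch under that assumption, not the full OCCE algorithm:

```python
import random

def round_assignment(x, seed=0):
    """Randomized rounding of a fractional task-to-server assignment:
    each task i picks server j with probability x[i][j] (rows sum to 1).
    Returns one integral assignment as a list of server indices."""
    rng = random.Random(seed)
    choice = []
    for row in x:
        r, acc = rng.random(), 0.0
        for j, p in enumerate(row):
            if p <= 0.0:
                continue            # skip servers with zero probability
            acc += p
            if r <= acc:
                choice.append(j)
                break
        else:                       # guard against floating-point underflow
            choice.append(len(row) - 1)
    return choice
```

Rounding this way preserves each task's expected load split, which is what makes approximation guarantees relative to the fractional optimum possible.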
Pub Date: 2025-02-01 | DOI: 10.1016/j.comnet.2024.111030
Linbo Zhai, Ping Zhao, Kai Xue, Yumei Li, Chen Cheng
As mobile augmented reality (MAR) applications continue to develop and spread, more and more data-intensive and computationally intensive tasks are sensitive to delay and energy consumption. The emergence of mobile edge computing provides an effective solution to the demands for low latency and low energy consumption. This paper studies the task offloading and multi-cache placement of MAR applications in a three-tier transmission system composed of edge servers, edge data centers, and the remote cloud. Under the constraints of computing resources and cache space, the task offloading and cache placement problems are formulated to maximize an energy efficiency function that includes the task completion rate and energy consumption. To solve this problem, we design a task offloading and multi-cache placement algorithm based on block coordinate descent. Firstly, we generate the priority queue for the tasks. Then, caches are placed according to the popularity of each cache and the cache space ratio of each edge node, and tasks are offloaded according to the priority of each edge node. After this initialization, the task offloading strategy and cache placement strategy are optimized using block coordinate descent. Finally, we update the optimized cache placement strategy and task offloading strategy until the objective function converges. Simulation results show that our algorithm can significantly shorten service delay and reduce energy consumption compared with other algorithms.
{"title":"Task offloading and multi-cache placement in multi-access mobile edge computing","authors":"Linbo Zhai, Ping Zhao, Kai Xue, Yumei Li, Chen Cheng","doi":"10.1016/j.comnet.2024.111030","DOIUrl":"10.1016/j.comnet.2024.111030","url":null,"abstract":"<div><div>As mobile augmented reality (MAR) applications continue to develop and spread, an increasing number of data-intensive and computation-intensive tasks are sensitive to delay and energy consumption. The emergence of mobile edge computing provides an effective solution to the demand for low latency and low energy consumption. This paper studies the task offloading and multi-cache placement of MAR applications in a three-tier transmission system composed of edge servers, edge data centers, and a remote cloud. Under the constraints of computing resources and cache space, the task offloading and cache placement problems are formulated to maximize an energy efficiency function that accounts for the task completion rate and energy consumption. To solve this problem, we design a task offloading and multi-cache placement algorithm based on block coordinate descent. First, we generate a priority queue for the tasks. Then, caches are placed according to the popularity of each cache item and the cache space ratio of each edge node, and tasks are offloaded according to the priority of each edge node. After this initialization, the task offloading strategy and the cache placement strategy are optimized alternately using block coordinate descent until the objective function converges. 
Simulation results show that our algorithm can significantly shorten service delay and reduce energy consumption compared with other algorithms.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111030"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143177370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
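The alternating structure this abstract describes (block coordinate descent over an offloading block and a cache placement block) can be illustrated with a minimal, dependency-free sketch: with caches fixed, route each task to the node that serves it most cheaply; with routing fixed, let each node cache its most requested contents; repeat until the objective stops improving. The objective, data layout, and constants below are hypothetical simplifications for illustration, not the paper's actual system model.

```python
def objective(offload, cache, tasks, beta=0.1):
    """Toy energy-efficiency objective: completion rate minus weighted energy.
    A cache hit at the chosen node is assumed to cut transmission energy."""
    completed, energy = 0, 0.0
    for t, node in zip(tasks, offload):
        hit = t["content"] in cache[node]
        completed += 1
        energy += t["size"] * (0.2 if hit else 1.0)  # hit saves most of the energy
    return completed / len(tasks) - beta * energy

def best_offload(cache, tasks, n_nodes):
    # Offloading block: with caches fixed, prefer a node holding the task's content.
    return [max(range(n_nodes), key=lambda n: (t["content"] in cache[n], -n))
            for t in tasks]

def best_cache(offload, tasks, n_nodes, slots):
    # Cache block: with offloading fixed, each node keeps its most popular contents.
    cache = []
    for n in range(n_nodes):
        counts = {}
        for t, node in zip(tasks, offload):
            if node == n:
                counts[t["content"]] = counts.get(t["content"], 0) + 1
        top = sorted(counts, key=counts.get, reverse=True)[:slots]
        cache.append(set(top))
    return cache

def bcd(tasks, n_nodes=2, slots=2, tol=1e-9, max_iter=50):
    """Alternate the two blocks until the objective converges."""
    offload = [0] * len(tasks)
    cache = [set() for _ in range(n_nodes)]
    prev = objective(offload, cache, tasks)
    for _ in range(max_iter):
        offload = best_offload(cache, tasks, n_nodes)
        cache = best_cache(offload, tasks, n_nodes, slots)
        cur = objective(offload, cache, tasks)
        if abs(cur - prev) < tol:  # objective stopped improving
            break
        prev = cur
    return offload, cache, cur
```

Each sub-problem is solved greedily here for brevity; the paper's algorithm solves each block under the full resource and cache-space constraints.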
Pub Date : 2025-02-01DOI: 10.1016/j.comnet.2024.111024
Ana Almeida , Pedro Rito , Susana Brás , Filipe Cabral Pinto , Susana Sargento
The interdependence between urban mobility and 5G networks can bring several advantages to both domains. By exploring this dynamic symbiosis, we can uncover opportunities to enhance the performance, efficiency, and safety of urban transportation systems while leveraging the capabilities of 5G networks to provide strong connectivity, high data rates, and low-latency communications. This work explores their relationship and shows that urban mobility data from vehicles on the roads can be used to predict mobile communication network usage and, conversely, that network data can be used to predict urban mobility. We analyze the correlation between urban mobility and mobile network usage, finding strong correlations between the number of vehicles in each road direction, measured by radars, and the usage of nearby 5G base stations. We then use the radar data to predict handovers between different 5G gNBs and the network traffic, and vice versa, using techniques such as LightGBM. We generate a mobility metric using Principal Component Analysis (PCA) and infer the mobility data from 5G network data and vice versa, creating areas of interest by grouping nearby 5G stations and radars. We observe that, in most cases, we can achieve good results in inference and prediction using LightGBM.
{"title":"Exploring the dynamic symbiosis of urban mobility and 5G networks","authors":"Ana Almeida , Pedro Rito , Susana Brás , Filipe Cabral Pinto , Susana Sargento","doi":"10.1016/j.comnet.2024.111024","DOIUrl":"10.1016/j.comnet.2024.111024","url":null,"abstract":"<div><div>The interdependence between urban mobility and 5G networks can bring several advantages to both domains. By exploring this dynamic symbiosis, we can uncover opportunities to enhance the performance, efficiency, and safety of urban transportation systems while leveraging the capabilities of 5G networks to provide strong connectivity, high data rates, and low-latency communications. This work explores their relationship and shows that urban mobility data from vehicles on the roads can be used to predict mobile communication network usage and, conversely, that network data can be used to predict urban mobility. We analyze the correlation between urban mobility and mobile network usage, finding strong correlations between the number of vehicles in each road direction, measured by radars, and the usage of nearby 5G base stations. We then use the radar data to predict handovers between different 5G gNBs and the network traffic, and vice versa, using techniques such as LightGBM. We generate a mobility metric using Principal Component Analysis (PCA) and infer the mobility data from 5G network data and vice versa, creating areas of interest by grouping nearby 5G stations and radars. We observe that, in most cases, we can achieve good results in inference and prediction using LightGBM. 
These results are highly relevant for adapting network resources in dynamic 5G slices while also predicting urban load and adapting traffic management on the roads.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"258 ","pages":"Article 111024"},"PeriodicalIF":4.4,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143178142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
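The PCA-derived mobility metric mentioned in the abstract above can be sketched without any libraries: standardize each radar's vehicle counts, form the covariance matrix, extract the leading eigenvector by power iteration, and project each time slot onto it. The data and function names are hypothetical; the paper's pipeline (and standard practice) would use an off-the-shelf PCA such as scikit-learn's.

```python
def standardize(X):
    """Column-wise z-score so each radar contributes on the same scale."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    stds = [max((sum((row[j] - means[j]) ** 2 for row in X) / n) ** 0.5, 1e-12)
            for j in range(d)]
    return [[(row[j] - means[j]) / stds[j] for j in range(d)] for row in X]

def first_principal_component(X, iters=200):
    """Leading eigenvector of the covariance matrix via power iteration."""
    Z = standardize(X)
    n, d = len(Z), len(Z[0])
    C = [[sum(Z[t][i] * Z[t][j] for t in range(n)) / n for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return Z, v

def mobility_metric(X):
    """One scalar per time slot: projection of standardized radar counts
    onto the first principal component (rows = time slots, cols = radars)."""
    Z, v = first_principal_component(X)
    return [sum(z[j] * v[j] for j in range(len(v))) for z in Z]
```

With the metric in hand, a regressor such as LightGBM can be trained in either direction: radar-derived metric to network load, or network counters to the metric.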