Pub Date: 2026-02-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.comcom.2025.108406
Carles Gomez , Jon Crowcroft
Traditionally, Internet of Things (IoT) communication technologies have been designed to offer low bit rates (from ∼10² to ∼10⁶ bit/s). However, recent IoT-intended technologies like 5G Reduced Capability (RedCap) support significantly greater bit rates (up to ∼10⁸ bit/s), enabling emerging IoT use cases that demand greater capacity. Thus, the spectrum of IoT scenarios and corresponding requirements is expanding, a trend that is expected to continue with 6G networks. In this context, the support, configuration and performance of a crucial upper-layer protocol like TCP become challenging. In this paper, based on our IETF standardization work, we describe how TCP can run suitably in a wide variety of IoT environments (from highly constrained scenarios to resource-rich ones). Furthermore, we present and study a novel TCP option called TCP Acknowledgment Rate Request (TARR), designed for further TCP adaptability, which is particularly useful for current and future IoT networks.
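The motivation for an ACK-rate option can be made concrete with a back-of-the-envelope sketch (our own toy model, not the TARR wire format): lowering the acknowledgment rate shrinks reverse-path traffic, which matters on constrained IoT uplinks.

```python
import math

def ack_overhead(file_bytes, mss=1460, ack_every=2, ack_size=40):
    """Estimate the number of ACKs and the reverse-path bytes they cost
    for a bulk transfer, assuming one ACK per `ack_every` full-sized
    segments (`ack_every` is an illustrative knob, not TARR's syntax)."""
    segments = math.ceil(file_bytes / mss)
    acks = math.ceil(segments / ack_every)
    return acks, acks * ack_size

# Classic delayed ACKs (one per 2 segments) vs. a requested lower
# rate (one per 8 segments) for a 1 MB transfer:
default_acks, default_bytes = ack_overhead(1_000_000, ack_every=2)
relaxed_acks, relaxed_bytes = ack_overhead(1_000_000, ack_every=8)
```

Here `ack_size` approximates a bare IPv4 + TCP header pair; on constrained radio links the real saving also includes fewer transceiver wake-ups.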
Title: "Making TCP IoT-friendly towards the 6G era" (Computer Communications, vol. 247, Article 108406)
Pub Date: 2026-02-01 | Epub Date: 2025-12-04 | DOI: 10.1016/j.comcom.2025.108398
Alessandro Perlo , Carla Fabiana Chiasserini , Gustavo De Veciana , Francesco Malandrino
In classification tasks, it is critical to accurately distinguish between specific classes, as misclassifications can undermine system reliability and user trust. In this paper, we study how client selection in both centralized and federated learning environments affects the performance of classification models trained on heterogeneous data. When training datasets across clients are statistically diverse, careful client selection becomes crucial to improve the ability of the model to discriminate between classes, while preserving privacy. In particular, we introduce a novel metric based on conformal prediction outcomes – the conformal correlation matrix – which captures the likelihood of class pairs co-occurring within conformal prediction sets. Unlike the traditional confusion matrix, which quantifies actual misclassifications, our metric characterizes potential ambiguities between classes, thus offering a complementary perspective on model performance and uncertainty. Through a series of examples, we demonstrate how our proposed metric can guide informed client selection and enhance model performance in both centralized and federated training settings. Our results highlight the potential of conformal-based metrics to improve classification reliability while safeguarding sensitive information about individual client data.
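The conformal correlation matrix described above can be sketched in a few lines (our reading of the abstract; the authors' exact definition and normalization may differ):

```python
from itertools import combinations

def conformal_correlation(pred_sets, n_classes):
    """Build a symmetric matrix M where M[i][j] is the fraction of
    conformal prediction sets that contain both class i and class j,
    i.e. the frequency with which the two classes are ambiguous."""
    m = [[0.0] * n_classes for _ in range(n_classes)]
    for s in pred_sets:
        for i, j in combinations(sorted(set(s)), 2):
            m[i][j] += 1
            m[j][i] += 1
    n = len(pred_sets)
    return [[v / n for v in row] for row in m]

# Classes 0 and 1 co-occur in 2 of the 4 prediction sets below,
# so M[0][1] = 0.5, while a confusion matrix would record nothing
# unless an actual misclassification occurred:
M = conformal_correlation([[0, 1], [0, 1, 2], [2], [0]], n_classes=3)
```

Unlike a confusion matrix, this needs no ground-truth errors: a pair scores high whenever the model cannot separate the two classes confidently.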
Title: "Characterizing the performance of classification models through conformal correlation matrices" (Computer Communications, vol. 247, Article 108398)
Pub Date: 2026-02-01 | Epub Date: 2025-11-05 | DOI: 10.1016/j.comcom.2025.108351
Xiguang Li , Junlong Li , Yunhe Sun , Ammar Muthanna , Ammar Hawbani , Liang Zhao
Vehicular Edge Computing (VEC) faces significant challenges in jointly managing caching and task offloading due to dynamic network conditions and resource constraints. This paper proposes a novel framework that addresses these challenges through a synergistic three-stage process. The innovation lies in the tight integration of our modules: first, a Spatio-Temporal Fast Graph Convolutional Network (ST-FGCN) accurately forecasts task demands by capturing complex spatio-temporal correlations. Second, these predictions guide a Prediction-Informed Edge Collaborative Caching (PIECC) algorithm to proactively optimize resource placement across edge servers. Finally, a Genetic Asynchronous Advantage Actor–Critic (GA3C) strategy performs robust task offloading within this optimized environment. Unlike traditional reinforcement learning methods that often struggle with the large state–action spaces in VEC and converge to local optima, our framework simplifies the decision process via predictive caching and enhances exploration with the GA-infused GA3C algorithm. Simulation results demonstrate that our proposed framework significantly reduces long-term system cost, outperforms baseline methods in both latency and energy efficiency, and offers a more adaptive solution for dynamic VEC systems.
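The second stage, placing content according to forecast demand, can be pictured with a minimal greedy sketch (names and interface are hypothetical; the actual PIECC algorithm coordinates placement across collaborating edge servers):

```python
def place_content(predicted_demand, capacity):
    """Toy prediction-informed placement: fill a single edge cache
    with the items of highest forecast demand.

    predicted_demand: {item: predicted request rate}
    capacity: number of items the cache can hold
    """
    ranked = sorted(predicted_demand, key=predicted_demand.get, reverse=True)
    return set(ranked[:capacity])

# With capacity for 2 items, the two hottest predicted items win:
cache = place_content({"mapA": 0.9, "mapB": 0.7, "fw": 0.2}, capacity=2)
```

The point of driving this with ST-FGCN forecasts rather than observed hit counts is that content is staged before the demand arrives, shrinking the state space the offloading agent must explore.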
Title: "Cache-assisted task offloading in Vehicular Edge Computing: A spatio-temporal deep reinforcement learning approach" (Computer Communications, vol. 246, Article 108351)
Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.comcom.2025.108375
Yangliu Hu , Qian Sun , Tianbin Dang , Guangzhi Wu , Lin Tian
The widespread application of fifth-generation (5G) networks brings great convenience but also poses security threats to the 5G core network (5GC). To effectively detect and prevent these attacks, deep learning (DL)-based anomaly detection methods are crucial; however, existing DL-based methods require anomalous data to train their models. Open-source anomaly datasets collected from wired networks cannot accurately represent the characteristics of 5G protocols, such as message sequences and fields, resulting in poor detection performance. To address this issue, this paper constructs a dataset for a 5GC intrusion detection system (5GCID), which consolidates the security vulnerabilities of the radio access network (RAN) domain and the 5GC user domain. These threats are analyzed, guiding the design of illustrative threat scenarios and bespoke attack tools. Utilizing our 5G network security experimental platform, we gather protocol data to compile the 5GCID dataset as the tools are executed on the platform. Furthermore, we provide two preprocessing methods for the 5GCID dataset, focusing on traffic characteristics and control-signal structures, respectively. Additionally, we design an intrusion protection method for the 5GC, underpinned by explainable artificial intelligence (XAI), referred to as X-5GCIPS. The proposed X-5GCIPS can be viewed as an application of the 5GCID dataset, promising to advance intelligent cybersecurity measures in next-generation networks.
Title: "5GCID: Dataset of 5GC intrusion detection system" (Computer Communications, vol. 247, Article 108375)
Cloud computing plays a crucial role in the Industry 4.0 era, particularly with the rise of Internet of Things (IoT) applications that support domains such as education, healthcare, business, and manufacturing. These applications consist of multiple services with diverse quality of service (QoS) requirements, making their development and deployment complex. While traditional cloud environments provide scalability, they often fail to support latency-sensitive and resource-intensive applications. To overcome these limitations, alternative paradigms such as Cloud–Fog–Edge (CFE), Cloud–Fog (CF), Cloud–Edge (CE), Fog–Edge (FE), and Mobile Edge Computing (MEC) have emerged. These models push computation, storage, and networking closer to end devices, reducing latency and bandwidth usage. However, the heterogeneity, mobility, and dynamic nature of these environments make service placement (a known NP-hard problem) a central challenge.
This article presents a systematic literature review of service placement approaches across the compute continuum. Following established SLR methodology, we identified and analyzed 124 peer-reviewed studies published between 2018 and 2024, classifying them by (i) deployment environment, (ii) service placement strategies and algorithms, (iii) adaptability of the solution, (iv) optimization objectives, (v) virtualization/orchestration technologies, (vi) evaluation methodologies, including workloads, testbeds, and simulation tools and (vii) use cases or application types.
The novelty of this work lies not only in providing a detailed taxonomy of placement approaches but also in being the first survey to consider all seven aspects and establish correlations between them. Our findings reveal that most existing works target smart health applications and favor heuristic-based placement in complex CFE scenarios, while research on scientific and compute-intensive workloads remains limited. We also identify Kubernetes as the most widely used orchestration technology and latency as the dominant optimization metric. Despite significant progress, the field is still maturing, with gaps in real-world validation and adaptive, ML-based placement strategies.
By consolidating technical approaches, evaluation practices, and open challenges, this survey offers both researchers and practitioners a structured overview of the state of the art and guidance for advancing service placement in the compute continuum.
Title: "Service placement in the continuum: A systematic literature review" (Computer Communications, vol. 246, Article 108370; DOI: 10.1016/j.comcom.2025.108370; Pub Date: 2026-02-01)
Authors: Waseem Sajjad, Montse Farreras, Jordi Garcia, Xavi Masip-Bruin
Pub Date: 2026-02-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.comcom.2025.108372
Toi Le-Thanh , Cuong Tran-Minh , Khuong Ho-Van
Wireless communication degrades quickly when obstacles impair signal propagation. These obstacles can be remedied with an intelligent reflecting surface (IRS), which purposely changes propagation conditions. However, security is a major concern in wireless communication, where active eavesdroppers are capable of energy harvesting (EH) and of interfering with authorized users. This work analyzes a system model in which a full-duplex (FD) source scavenges energy from a power station and broadcasts a non-orthogonal multiple access (NOMA) signal to a close user and a distant user with the aid of an active IRS (which mainly reduces the double loss caused by double reflection), in the presence of an FD wire-tapper capable of EH and of interfering with licensed users. Through security analysis, the proposed system (active IRS-aided NOMA with an FD EH wire-tapper) is shown to outperform its counterpart (active IRS-aided orthogonal multiple access with an FD EH wire-tapper).
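Such security analyses typically rest on the standard secrecy-capacity metric (the general textbook definition, not this paper's exact SINR expressions):

```latex
C_s = \left[ \log_2\!\left(1 + \gamma_U\right) - \log_2\!\left(1 + \gamma_E\right) \right]^{+}
```

where $\gamma_U$ and $\gamma_E$ denote the received SINR at the legitimate user and at the wire-tapper, respectively, and $[x]^{+} = \max(x, 0)$. Intuitively, an active IRS raises $\gamma_U$ by amplifying the reflected signal, while the wire-tapper's harvested energy and jamming shape both SINRs.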
Title: "Active IRS-aided NOMA with full-duplex energy harvesting wire-tapper: Performance evaluation" (Computer Communications, vol. 246, Article 108372)
Pub Date: 2026-02-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.comcom.2025.108401
Zongming Wu , Qiang Tang , Jijun Cao , Sihao Wen , Bao Li
For highly dynamic and complex communication networks, existing DRL-based routing optimization solutions suffer from inefficient training, leading to degraded network performance. In this paper, we propose an Intelligent Routing Optimization method with Deep Reinforcement Learning and Betweenness Centrality Theory (IROD-BC). This SDN routing solution, based on distributed proximal policy optimization, achieves fast training convergence and improves the overall performance of the network. First, before training, we select a set of controlled nodes in the network based on Betweenness Centrality Theory. Second, during training, we adjust the weights of the links in the weighted shortest path algorithm based on this set of controlled nodes to improve the convergence efficiency of distributed proximal policy optimization. The learning agent modifies the weights of the links incident to the controlled nodes based on the network traffic state information of those nodes, reducing the agent's dependence on the network topology. We utilize the SDN controller to collect network traffic state information, including packet loss and latency. Ultimately, IROD-BC learns to make better routing control decisions from its own experience by interacting with the network environment until the learning agent converges and obtains the optimal routing paths. We conducted extensive experiments on three real network topologies to evaluate the performance of IROD-BC. The experimental results show that IROD-BC outperforms existing DRL-based routing solutions and the OSPF algorithm in terms of latency, link throughput, and packet loss.
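The node-selection stage can be illustrated with Brandes' classic betweenness-centrality algorithm for unweighted graphs (the selection criterion comes from the abstract; the code and graph representation are our own sketch):

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted graph given
    as an adjacency dict; ordered (s, t) pairs are counted, so for an
    undirected graph each unordered pair contributes twice."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                     # BFS from s
            v = q.popleft(); stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                 # dependency accumulation
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def controlled_nodes(adj, k):
    """Pick the k highest-betweenness nodes as the controlled set."""
    bc = betweenness(adj)
    return sorted(adj, key=bc.get, reverse=True)[:k]

# On a 5-node path a-b-c-d-e, the middle node carries every
# transit path and is selected first:
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
```

Restricting the agent's actions to link weights around such nodes shrinks the action space while still covering the links most shortest paths traverse.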
Title: "Intelligent routing optimization with deep reinforcement learning and Betweenness Centrality Theory in software-defined networks" (Computer Communications, vol. 247, Article 108401)
Pub Date: 2026-02-01 | Epub Date: 2025-12-12 | DOI: 10.1016/j.comcom.2025.108400
Katarzyna Kosek-Szott , Szymon Szott , Wojciech Ciezobka , Maksymilian Wojnar , Krzysztof Rusek , Jonathan Segev
Indoor positioning is an enabling technology for home, office, and industrial network users because it provides numerous information and communication technology (ICT) and Internet of things (IoT) functionalities such as indoor navigation, smart meter localization, asset tracking, support for emergency services, and detection of hazardous situations. The IEEE 802.11mc fine timing measurement (FTM) protocol (commercially known as Wi-Fi Location) has great potential to enable indoor positioning in future generation devices, primarily because of the high availability of Wi-Fi networks, FTM’s high accuracy and device support. Furthermore, new FTM enhancements are available in the released (802.11az) and recently completed (802.11bk) amendments. Despite the multitude of literature reviews on indoor positioning, a survey dedicated to FTM and its recent enhancements has so far been lacking. We fill this gap by classifying and reviewing over 180 research papers related to the practical accuracy achieved with FTM, methods for improving its accuracy (also with machine learning), combining FTM with other indoor positioning systems, FTM-based applications, and security issues. Based on the conducted survey, we summarize the most important research achievements and formulate open areas for further research.
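For readers new to FTM, the basic ranging arithmetic is standard: the initiator timestamps its transmission (t1) and the reply's arrival (t4), the responder reports its receive and transmit times (t2, t3), and the responder's turnaround time is removed from the round trip before halving:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def ftm_distance(t1, t2, t3, t4):
    """Range estimate from one FTM exchange (timestamps in seconds).

    (t4 - t1) is the full round-trip time at the initiator;
    (t3 - t2) is the responder's turnaround, which carries no
    propagation information and is subtracted out.
    """
    rtt = (t4 - t1) - (t3 - t2)
    return C * rtt / 2

# A 50 ns one-way flight with a 1 microsecond turnaround:
d = ftm_distance(0.0, 50e-9, 1050e-9, 1100e-9)  # ~15 m
```

In practice a burst of such exchanges is averaged, and the accuracy-improvement literature surveyed here targets the residual errors (clock granularity, multipath) that this idealized formula ignores.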
Title: "Indoor positioning with Wi-Fi Location: A survey of IEEE 802.11mc/az/bk fine timing measurement research" (Computer Communications, vol. 247, Article 108400)
Pub Date: 2026-02-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.comcom.2025.108378
D. Olliaro , V. Mancuso , P. Castagno , M. Sereno , M. Ajmone Marsan
We study the performance of online games played over a platform that implements gaming as a service (GaaS) in a mobile network slice that hosts concatenated virtual network functions (VNFs) at the edge. The distributed gaming architecture is based on edge computing facilities, whose utilization must be carefully planned and managed so as to satisfy the stringent performance requirements of game applications. The game manager must consider the latency between players and edge server VNFs, the capacity and load of edge servers, and the latency between edge servers used by interacting players. This calls for a careful choice in the allocation of players to edge server VNFs, aiming at extremely low latency in the interactions resulting from players' commands. We develop an analytical model, which we validate with experiments in the wild, and show that, under several combinations of system parameters, deploying gaming VNFs at the edge can deliver better performance than cloud gaming, in spite of the complexities arising from the distribution of gaming VNFs over edge servers. Our analytical model provides a useful tool for edge gaming performance prediction, thus supporting the management of GaaS applications.
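One way to picture the allocation choice the abstract describes is a min-max latency rule over servers with spare capacity (a toy sketch with hypothetical inputs; the paper's model additionally accounts for inter-server latency between interacting players and for VNF chaining):

```python
def pick_edge_server(latency, load, capacity):
    """Choose the edge server that minimizes the worst player-to-server
    latency for a group of interacting players, among servers that
    still have spare capacity.

    latency:  {server: {player: latency in ms}}
    load:     {server: sessions currently hosted}
    capacity: {server: maximum sessions}
    """
    feasible = (s for s in latency if load[s] < capacity[s])
    return min(feasible, key=lambda s: max(latency[s].values()))

# edge1 is closer to p1 but far from p2; the min-max rule prefers
# edge2, whose worst-case player latency is lower:
best = pick_edge_server(
    {"edge1": {"p1": 5, "p2": 30}, "edge2": {"p1": 12, "p2": 14}},
    load={"edge1": 3, "edge2": 1},
    capacity={"edge1": 4, "edge2": 4},
)
```

Minimizing the maximum rather than the mean reflects that an interactive match is only as responsive as its worst-served player.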
{"title":"Performance of distributed multiparty online gaming over edge computing platforms","authors":"D. Olliaro , V. Mancuso , P. Castagno , M. Sereno , M. Ajmone Marsan","doi":"10.1016/j.comcom.2025.108378","DOIUrl":"10.1016/j.comcom.2025.108378","url":null,"abstract":"<div><div>We study the performance of online games played over a platform that implements gaming as a service (GaaS) in a mobile network slice that hosts concatenated virtual network functions (VNFs) at the edge. The distributed gaming architecture is based on edge computing facilities, whose utilization must be carefully planned and managed, so as to satisfy the stringent performance requirements of game applications. The game manager must consider the latency between players and edge server VNFs, the capacity and load of edge servers, and the latency between edge servers used by interacting players. This calls for a careful choice about the allocation of players to edge server VNFs, aiming at extremely low latency in interactions resulting from players’ commands. We develop an analytical model, which we validate with experiments in the wild, and show that, under several combinations of system parameters, deploying gaming VNFs at the edge can deliver better performance than cloud gaming, in spite of the complexities arising from the distribution of gaming VNFs over edge servers. Our analytical model provides a useful tool for performance prediction of edge gaming systems, thus supporting the management of GaaS applications.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108378"},"PeriodicalIF":4.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
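The allocation problem the abstract describes, assigning players to edge-server VNFs under latency and capacity constraints, can be illustrated with a minimal sketch. This is not the authors' analytical model; the greedy heuristic, the latency/capacity data layout, and all names are illustrative assumptions.

```python
def allocate_players(latency, capacity):
    """Greedy player-to-edge-VNF allocation (illustrative sketch).

    latency[p][s]  : measured player-to-edge-server latency in ms (assumed given)
    capacity[s]    : number of player slots on edge server s

    Players with the tightest best-case latency are placed first; each
    player gets the lowest-latency server that still has free capacity.
    """
    assignment = {}
    load = {s: 0 for s in capacity}
    # Most latency-constrained players (smallest best latency) go first.
    for p in sorted(latency, key=lambda p: min(latency[p].values())):
        for s in sorted(latency[p], key=latency[p].get):
            if load[s] < capacity[s]:
                assignment[p] = s
                load[s] += 1
                break
    return assignment

players = {"p1": {"e1": 5, "e2": 20}, "p2": {"e1": 6, "e2": 8}}
slots = {"e1": 1, "e2": 2}
print(allocate_players(players, slots))  # p1 takes e1; p2 overflows to e2
```

A real game manager would also weigh inter-server latency between interacting players, which this greedy pass ignores; the sketch only shows the shape of the decision.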
Pub Date : 2026-02-01Epub Date: 2025-12-04DOI: 10.1016/j.comcom.2025.108376
Arnau Romero , Carmen Delgado , Jana Baguer , Raúl Suárez , Xavier Costa-Pérez
Cellular-enabled collaborative robots are becoming paramount in Search-and-Rescue (SAR) and emergency response. Crucially dependent on resilient mobile network connectivity, they serve as invaluable assets for tasks like rapid victim localization and the exploration of hazardous, otherwise unreachable areas. However, their reliance on battery power and the need for persistent, low-latency communication limit operational time and mobility. To address this, and considering the evolving capabilities of 5G/6G networks, we propose a novel SAR framework that includes Mission Planning and Mission Execution phases and that optimizes robot deployment. By considering parameters such as the exploration area size, terrain elevation, robot fleet size, communication-influenced energy profiles, desired exploration rate, and target response time, our framework determines the minimum number of robots required and their optimal paths to ensure effective coverage and timely data backhaul over mobile networks. Our results demonstrate the trade-offs among the number of robots, the explored area, and the response time for wheeled and quadruped robots. Further, we quantify the impact of terrain elevation data on mission time and energy consumption, showing the benefits of incorporating real-world environmental factors, which may also affect mobile signal propagation and connectivity, into SAR planning. This framework provides critical insights for leveraging next-generation mobile networks to enhance autonomous SAR operations.
{"title":"Enhancing cellular-enabled collaborative robots planning through GNSS data for SAR scenarios","authors":"Arnau Romero , Carmen Delgado , Jana Baguer , Raúl Suárez , Xavier Costa-Pérez","doi":"10.1016/j.comcom.2025.108376","DOIUrl":"10.1016/j.comcom.2025.108376","url":null,"abstract":"<div><div>Cellular-enabled collaborative robots are becoming paramount in Search-and-Rescue (SAR) and emergency response. Crucially dependent on resilient mobile network connectivity, they serve as invaluable assets for tasks like rapid victim localization and the exploration of hazardous, otherwise unreachable areas. However, their reliance on battery power and the need for persistent, low-latency communication limit operational time and mobility. To address this, and considering the evolving capabilities of 5G/6G networks, we propose a novel SAR framework that includes Mission Planning and Mission Execution phases and that optimizes robot deployment. By considering parameters such as the exploration area size, terrain elevation, robot fleet size, communication-influenced energy profiles, desired exploration rate, and target response time, our framework determines the minimum number of robots required and their optimal paths to ensure effective coverage and timely data backhaul over mobile networks. Our results demonstrate the trade-offs among the number of robots, the explored area, and the response time for wheeled and quadruped robots. Further, we quantify the impact of terrain elevation data on mission time and energy consumption, showing the benefits of incorporating real-world environmental factors, which may also affect mobile signal propagation and connectivity, into SAR planning. This framework provides critical insights for leveraging next-generation mobile networks to enhance autonomous SAR operations.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"247 ","pages":"Article 108376"},"PeriodicalIF":4.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
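The fleet-sizing question in the abstract (how many robots are needed given area size, exploration rate, response time, battery limits, and terrain) can be sketched as a back-of-envelope lower bound. This is not the authors' planner; the parameter names, the per-robot coverage-rate abstraction, and the elevation penalty factor are all illustrative assumptions.

```python
import math

def min_robots(area_m2, coverage_rate_m2_per_s, target_fraction,
               response_time_s, battery_s, elevation_penalty=1.0):
    """Lower bound on SAR fleet size (illustrative sketch).

    area_m2               : size of the exploration area
    coverage_rate_m2_per_s: area one robot can sweep per second (assumed)
    target_fraction       : desired exploration rate, e.g. 0.8 for 80%
    response_time_s       : target mission response time
    battery_s             : robot endurance on one charge
    elevation_penalty     : >= 1, extra work factor for rough terrain
    """
    # A robot can only explore for as long as both the mission deadline
    # and its battery allow.
    usable_time = min(response_time_s, battery_s)
    effective_area = area_m2 * target_fraction * elevation_penalty
    per_robot_area = coverage_rate_m2_per_s * usable_time
    return math.ceil(effective_area / per_robot_area)

# 1 ha area, 80% coverage target, 10-minute deadline, 20-minute battery.
print(min_robots(10_000, 1.0, 0.8, 600, 1200))       # flat terrain
print(min_robots(10_000, 1.0, 0.8, 600, 1200, 1.5))  # elevation penalty
```

Raising the elevation penalty directly inflates the required fleet, which mirrors the paper's observation that terrain elevation data materially changes mission time and energy budgets.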