Performance of distributed multiparty online gaming over edge computing platforms
Pub Date: 2025-11-28 DOI: 10.1016/j.comcom.2025.108378
D. Olliaro, V. Mancuso, P. Castagno, M. Sereno, M. Ajmone Marsan
We study the performance of online games played over a platform that implements gaming as a service (GaaS) in a mobile network slice hosting concatenated virtual network functions (VNFs) at the edge. The distributed gaming architecture is based on edge computing facilities, whose utilization must be carefully planned and managed to satisfy the stringent performance requirements of game applications. The game manager must consider the latency between players and edge server VNFs, the capacity and load of edge servers, and the latency between the edge servers used by interacting players. This calls for a careful allocation of players to edge server VNFs, aiming at extremely low latency in the interactions triggered by players' commands. We develop an analytical model, which we validate with experiments in the wild, and show that, under several combinations of system parameters, deploying gaming VNFs at the edge can deliver better performance than cloud gaming, in spite of the complexities arising from the distribution of gaming VNFs over edge servers. Our analytical model provides a useful tool for predicting the performance of edge gaming systems, thus supporting the management of GaaS applications.
{"title":"Performance of distributed multiparty online gaming over edge computing platforms","authors":"D. Olliaro , V. Mancuso , P. Castagno , M. Sereno , M. Ajmone Marsan","doi":"10.1016/j.comcom.2025.108378","DOIUrl":"10.1016/j.comcom.2025.108378","url":null,"abstract":"<div><div>We study the performance of online games played over a platform that implements gaming as a service (GaaS) in a mobile network slice that hosts concatenated virtual network functions (VNFs) at the edge. The distributed gaming architecture is based on edge computing facilities, whose utilization must be carefully planned and managed, so as to satisfy the stringent performance requirements of game applications. The game manager must consider the latency between players and edge server VNFs, the capacity and load of edge servers, and the latency between edge servers used by interacting players. This calls for a careful choice about the allocation of players to edge server VNFs, aiming at extremely low latency in interactions resulting from player’s commands. We develop an analytical model, which we validate with experiments in the wild, and show that, under several combinations of system parameters, deploying gaming VNFs at the edge can deliver better performance with respect to cloud gaming, in spite of the complexities arising from the distribution of gaming VNFs over edge servers. Our analytical model provides a useful tool for edge gaming systems performance prediction, thus supporting the management of GaaS applications.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108378"},"PeriodicalIF":4.3,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical Stackelberg game-based collaborative learning for ultrasound intelligence in wireless edge healthcare networks
Pub Date: 2025-11-27 DOI: 10.1016/j.comcom.2025.108377
Fei Chen, Shalli Rani, Byung-Gyu Kim, Shakila Basheer, Huamao Jiang
The integration of artificial intelligence in wireless edge healthcare networks has revolutionized medical imaging, particularly ultrasound diagnostics, where real-time processing and privacy preservation are paramount. Traditional centralized AI approaches face substantial obstacles in healthcare applications, including communication bottlenecks, privacy violations, and inadequate resource allocation among heterogeneous medical devices. This paper introduces a novel hierarchical Stackelberg game-based collaborative learning framework for ultrasound intelligence in wireless edge healthcare networks (HSGUL), which applies Stackelberg game mechanisms to ultrasound image analysis tasks. Exploiting the computational heterogeneity of medical edge devices, the framework establishes dynamic game relationships among cloud healthcare platforms, edge medical clusters, and ultrasound diagnostic nodes through a dual-pricing fair incentive process. This yields personalized hierarchical resource allocation strategies that reach Nash equilibrium solutions for ultrasound model training, effectively accelerating edge-based medical AI models. The framework operates through a three-phase Stackelberg game mechanism that coordinates resource allocation and incentive distribution across the healthcare network hierarchy. Experimental validation on cardiac, abdominal, and thyroid ultrasound datasets demonstrates superior performance compared to established baselines: HSGUL achieves 94.73% accuracy on cardiac ultrasound classification while reducing communication overhead by 47% compared to centralized approaches, preserving patient data privacy through localized edge processing, and enabling scalable deployment across healthcare institutions with varying computational capabilities.
Computer Communications, vol. 247, Article 108377.
Active IRS-aided NOMA with full-duplex energy harvesting wire-tapper: Performance evaluation
Pub Date: 2025-11-24 DOI: 10.1016/j.comcom.2025.108372
Toi Le-Thanh, Cuong Tran-Minh, Khuong Ho-Van
Wireless communication quality degrades quickly when obstacles impair signal propagation. These obstacles can be remedied with an intelligent reflecting surface (IRS), which purposely reshapes propagation conditions. However, security is a major concern in wireless communication, where active eavesdroppers are capable of energy harvesting (EH) and of interfering with authorized users. This work analyzes a system model in which a full-duplex (FD) source scavenges energy from a power station and broadcasts a non-orthogonal multiple access (NOMA) signal to a close user and a distant user with the aid of an active IRS (which mainly mitigates the double path loss caused by double reflection), in the presence of an FD wire-tapper capable of EH and of interfering with licensed users. Through a security analysis, the proposed system (active IRS-aided NOMA with an FD EH wire-tapper) is shown to outperform its counterpart (active IRS-aided orthogonal multiple access with an FD EH wire-tapper).
{"title":"Active IRS-aided NOMA with full-duplex energy harvesting wire-tapper: Performance evaluation","authors":"Toi Le-Thanh , Cuong Tran-Minh , Khuong Ho-Van","doi":"10.1016/j.comcom.2025.108372","DOIUrl":"10.1016/j.comcom.2025.108372","url":null,"abstract":"<div><div>Wireless communication is quickly degraded due to obstacles in signal propagation. These obstacles can be remedied with intelligent reflecting surface (IRS), which purposely changes propagation conditions. However, security is a big concern in wireless communication, where active eavesdroppers are capable of energy harvesting (EH) and interfere with authorized users. This work analyzes a system model in which a full-duplex (FD) source scavenges energy from the power station and broadcasts a non-orthogonal multiple access (NOMA) signal to a close user and a distant user with the aid of active IRS (mainly reducing double loss due to double reflection) subject to a FD wire-tapper with ability of EH and interfering licensed users. By security analysis, the proposed system (active IRS-aided NOMA with FD EH wire-tapper) is demonstrated better than its counterpart (active IRS-aided orthogonal multiple access with FD EH wire-tapper).</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108372"},"PeriodicalIF":4.3,"publicationDate":"2025-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-SPLIT: A hierarchical genetic algorithm for energy-efficient distributed AI services
Pub Date: 2025-11-22 DOI: 10.1016/j.comcom.2025.108371
Lina Magoula, Nikolaos Koursioumpas, Ioannis Stavrakakis, Nancy Alonistioti
As we progress toward a new era of Artificial Intelligence (AI)-enabled wireless networks, the focus shifts to deploying distributed intelligence to enhance network automation, scalability, and responsiveness. Despite its merits, distributed intelligence often leads to resource-intensive deployments, which raise energy concerns. These concerns are further amplified by the limited availability of resource orchestration strategies capable of addressing the multi-faceted nature of distributed AI. This work targets the minimization of the energy consumed by distributed AI services, proposing a custom meta-heuristic, two-tier hierarchical genetic algorithm (HGA) that follows a divide-and-conquer strategy to provide effective chained decision-making. The first tier of the HGA determines the optimal placement of the model partitions of an AI service on the underlying network, while the second tier focuses on strategic resource allocation for each partition, ensuring that service latency requirements are met. A safe strategy selection is proposed, applying a custom repair mechanism and a penalty function that discourages constraint violations. Evaluation results show the effectiveness and robustness of the proposed HGA, compared to two state-of-the-art baseline solutions, across different network environments and evaluation scenarios. The HGA achieves up to a 94.1% decrease in the total energy consumption per service compared to the baselines, while entirely eliminating infeasible strategies.
{"title":"E-SPLIT: A hierarchical genetic algorithm for energy-efficient distributed AI services","authors":"Lina Magoula, Nikolaos Koursioumpas, Ioannis Stavrakakis, Nancy Alonistioti","doi":"10.1016/j.comcom.2025.108371","DOIUrl":"10.1016/j.comcom.2025.108371","url":null,"abstract":"<div><div>As we progress toward a new era of Artificial Intelligence (AI)-enabled wireless networks, the focus shifts to deploying distributed intelligence to enhance network automation, scalability, and responsiveness. Despite its merits, it often leads to resource-intensive deployments, which raise energy concerns. These concerns are further amplified by the limited availability of resource orchestration strategies capable of addressing the multi-faceted nature of distributed AI. This work targets energy consumption minimization of distributed AI services by proposing a custom meta-heuristic, two-tier hierarchical genetic algorithm (HGA) that integrates a divide-and-conquer strategy to provide effective chained decision-making. The first tier of HGA determines the optimal placement of model partitions within an AI service on the underlying network, while the second tier focuses on strategic resource allocation for each partition, ensuring that service latency requirements are met. A safe strategy selection is proposed, applying a custom repair mechanism and a penalty function that discourages constraints violation. Evaluation results show the effectiveness and robustness of the proposed HGA, compared to two state-of-the-art baseline solutions, on different network environments and evaluation scenarios. HGA achieves up to 94.1% decrease in the total energy consumption per service compared to the baselines, while entirely eliminating infeasible strategies.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108371"},"PeriodicalIF":4.3,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperative access resource orchestration for extended reality services in spatially dense scenarios
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108368
Alessandro Priviero, Luca Mastrandrea, Ioannis Chatzigiannakis, Stefania Colonnese
Mobile Extended Reality (XR) communication services offer unparalleled opportunities across various verticals, but present specific challenges due to their high throughput requirements, often in the hundreds of Mbps, and stringent end-to-end delay constraints, often as low as a few milliseconds. To address these challenges, we propose the eXtended Reality-Oriented Orchestration of Access Resources (X-OAR), designed to support XR applications by leveraging next-generation network access technologies, even in densely populated environments. X-OAR integrates cooperatively scheduled radio access network resources with edge computing capabilities, and complies with the stringent delay requirements defined by 3GPP for XR quality of experience through cooperative joint scheduling of edge and radio resources. We formulate the delay minimization problem using a graph-based approach and introduce a greedy algorithm that reduces orchestration complexity and the need for prior knowledge of user subscription data. Numerical simulations demonstrate that X-OAR's cooperative scheduling outperforms state-of-the-art solutions, delivering superior XR quality of experience. Furthermore, X-OAR paves the way for future research on extending orchestration to application-layer strategies and resource-aware charging policies.
Computer Communications, vol. 246, Article 108368.
I still know you were here: Leveraging probe request templates for identifying Wi-Fi devices at scale
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108367
Daniel Vogel, Felix Viola, Nicholas Malte Kreimeyer, Daniel Bücheler, Sebastian Böhm, Michael Meier
MAC address randomization in Wi-Fi Probe Requests (PRs) is supposed to ensure unlinkability and thereby improve user privacy, but PRs still contain enough information to track or re-identify users without relying on the MAC address. This poses a privacy risk but may also assist law enforcement in identifying devices present at a crime scene. We examine whether it is possible to separate Wi-Fi devices based on observed PRs with randomized MAC addresses, using only the content and structure of these PRs. Previous work has predominantly focused on techniques for device counting that use feature reduction to manage data set complexity, and has failed to achieve sufficient accuracy for individual device identification. We propose an approach that leverages templates reflecting a PR's structure to identify vendor- and device-specific influences on that structure, allowing the use of more complex fingerprinting algorithms that exploit the full set of available features. To that end, we examine differences between vendors and devices based on observation length, the information elements used, and overlapping Fingerprints (FPs), while ignoring MAC addresses. Using PR templating, we construct a knowledge base of FPs and templates from existing public data sets as well as a new data set published for future research. Our data set tackles critical challenges in the labelling and quality of currently available data sets, and we introduce a streamlined and comprehensible crowdsourcing process, including automated measurements, that enables other researchers to contribute to our data set. We evaluate our device identification approach on the currently available data and demonstrate that, depending on the data set, between 75% and 85% of Wi-Fi devices can be uniquely separated within the anonymity group of devices contributing PRs to the respective data set.
{"title":"I still know you were here: Leveraging probe request templates for identifying Wi-Fi devices at scale","authors":"Daniel Vogel , Felix Viola , Nicholas Malte Kreimeyer , Daniel Bücheler , Sebastian Böhm , Michael Meier","doi":"10.1016/j.comcom.2025.108367","DOIUrl":"10.1016/j.comcom.2025.108367","url":null,"abstract":"<div><div>MAC address randomization in Wi-Fi Probe Requests (PRs) is supposed to ensure unlinkability for improving user privacy, but PRs still contain enough information to track or re-identify users without relying on the MAC address. This poses a risk to privacy but may also assist law enforcement in identifying devices present at a crime scene. We examine whether it is possible to separate Wi-Fi devices based on observed PRs with randomized MAC addresses using only the content and structure of these PRs. Previous work has predominantly focused on techniques for device counting using feature reduction as a means to manage data set complexity, and has failed to achieve sufficient accuracy for individual device identification. We propose an approach that leverages templates reflecting a PR’s structure to identify its influence based on vendor and device, allowing the use of more complex fingerprinting algorithms that utilize the full set of available features. To that end we examine differences between vendors and devices based on observation length, used information elements, and overlapping Fingerprints (FPs) while ignoring MAC addresses. Using PR templating, we construct a knowledge base of FPs and templates from not only existing public data sets but also a new data set published for future research. Our data set tackles critical challenges in labelling and quality in currently available data sets, and we introduce a streamlined and comprehensible crowdsourcing process including automated measurements to enable other researchers to contribute to our data set. We evaluate our device identification approach on the currently available data and demonstrate that, depending on the data set, between 75<!--> <!-->% and 85<!--> <!-->% of Wi-Fi devices can be uniquely separated within the anonymity group of devices contributing PRs to the respective data set.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108367"},"PeriodicalIF":4.3,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning-driven cellular–satellite multi-connectivity for monitoring livestock transport in rural areas
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108366
Poonam Maurya, Alejandro Ramírez-Arroyo, Troels Bundgaard Sørensen, Sebastian Bro Damsgaard
Emerging domains such as wireless industrial control, vehicular communications, smart grids, and augmented reality demand low latency, high throughput, and high reliability from wireless communication systems. Unfortunately, single connectivity (SC) communications frequently fail to fulfill these stringent requirements. To address these challenges, employing a multi-connectivity (MC) solution appears to be a promising technique. In this paper, in the context of the Horizon Europe COMMECT project, we develop a multi-connectivity solution that intelligently integrates cellular and satellite networks to monitor livestock transport in rural regions where 5G coverage is limited. Multi-connectivity can help meet EU regulations requiring seamless communication between transport units and the operational center to ensure animal welfare during transit. To achieve this, we employ machine learning (ML) models within a classification and regression framework in the proposed multi-connectivity solution. The ML models process radio-related key performance indicators (KPIs) as inputs to estimate network throughput and latency. The outputs of the models are used to decide whether to continue with the cellular link or activate the backup satellite link in the multi-connectivity setup, ensuring an almost uninterrupted connection. This capability is particularly crucial in regions where 5G coverage is limited and maintaining a reliable connection is essential. To evaluate the proposed framework, we used a hybrid emulation setup based on experimental data collected in the northern part of Denmark. The emulation results demonstrate that the MC solution significantly outperforms cellular SC. Although our solution is designed for livestock transport monitoring, it can be adapted for other applications, such as precision farming, in areas with insufficient 5G availability.
{"title":"Machine learning-driven cellular–satellite multi-connectivity for monitoring livestock transport in rural areas","authors":"Poonam Maurya, Alejandro Ramírez-Arroyo, Troels Bundgaard Sørensen, Sebastian Bro Damsgaard","doi":"10.1016/j.comcom.2025.108366","DOIUrl":"10.1016/j.comcom.2025.108366","url":null,"abstract":"<div><div>Emerging domains such as wireless industrial control, vehicular communications, smart grids, and augmented reality demand low latency, high throughput, and high reliability from wireless communication systems. Unfortunately, single connectivity (SC) communications frequently fail to fulfill these stringent requirements. To address these challenges, employing a multi-connectivity (MC) solution appears to be a promising technique. In this paper, in the context of Horizon Europe COMMECT project, we seek to develop a multi-connectivity solution that intelligently integrates cellular and satellite networks for the purpose of monitoring livestock transport in rural regions where 5G coverage is limited. Multi-connectivity can be helpful for meeting EU regulations requiring seamless communication between transport units and the operational center to ensure animal welfare during transit. To achieve this, we employ machine learning (ML) models within a Classification and Regression framework in the proposed multi-connectivity solution. The ML models process radio-related key performance indicators (KPIs) as inputs to estimate network throughput and latency. The outputs of the model are used to decide whether to continue with the cellular link or activate the backup satellite link in the multi-connectivity setup, ensuring an almost uninterrupted connection. This capability is particularly crucial in regions where 5G coverage is limited, and maintaining a reliable connection is essential. To evaluate the proposed framework, we used a hybrid emulation setup based on experimental data collected in the northern part of Denmark. The emulation results demonstrate that the MC solution significantly outperforms the cellular SC. Although our solution is designed for livestock transport monitoring, it can be adapted for other applications, such as precision farming, in areas with insufficient 5G availability.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108366"},"PeriodicalIF":4.3,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145571962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FIWARE IoT agent for Matter: Toward the integration of smart home devices into the FIWARE smart city platform
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108369
Nguyen Thi Dieu Linh, Thanh-Trung Nguyen
The rapid adoption of the Matter protocol as a global standard for smart home appliances presents a significant opportunity to enhance the interoperability of IoT devices. Concurrently, FIWARE, a widely adopted smart city platform, facilitates seamless integration and management of diverse IoT devices through standardized data models and context management capabilities. However, integrating Matter-enabled devices with FIWARE remains a challenge due to differences in protocol architectures and data representations. This paper proposes a novel framework that bridges the Matter protocol and the FIWARE platform, enabling interworking between smart home appliances and smart city services. The proposed solution includes: (1) a data model and an adaptive translation algorithm for mapping Matter device specifications to FIWARE Smart Data Models, (2) an IoT Agent that exposes Matter devices to the FIWARE platform as FIWARE resources, and (3) a comprehensive evaluation using commercial and experimental Matter-enabled devices. This approach ensures seamless data exchange, enhances cross-domain interoperability, and supports the integration of Matter devices into smart city contexts. The results demonstrate the feasibility and efficiency of the integration, offering a scalable framework for leveraging low-cost Matter devices as assets within smart city platforms. This work contributes to advancing IoT interoperability and paves the way for innovative applications in smart homes and smart cities.
{"title":"FIWARE IoT agent for Matter: Toward the integration of smart home devices into the FIWARE smart city platform","authors":"Nguyen Thi Dieu Linh , Thanh-Trung Nguyen","doi":"10.1016/j.comcom.2025.108369","DOIUrl":"10.1016/j.comcom.2025.108369","url":null,"abstract":"<div><div>The rapid adoption of the Matter protocol as a global standard for smart home appliances presents a significant opportunity to enhance the interoperability of IoT devices. Concurrently, FIWARE, a widely adopted smart city platform, facilitates seamless integration and management of diverse IoT devices through standardized data models and context management capabilities. However, integrating Matter-enabled devices with FIWARE remains a challenge due to differences in protocol architectures and data representations. This paper proposes a novel framework to bridge the Matter protocol and the FIWARE platform, enabling interworking between smart home appliances and smart city services. The proposed solution includes: (1) a data model and adaptive translation algorithm for mapping Matter device specifications to FIWARE Smart Data Models, (2) an IoT Agent to interface Matter devices with the FIWARE platform as FIWARE resources, and (3) a comprehensive evaluation using commercial and experimental Matter-enabled devices. This approach ensures seamless data exchange, enhances cross-domain interoperability, and supports the integration of Matter devices into smart city contexts. The results demonstrate the feasibility and efficiency of the integration, offering a scalable framework for leveraging low-cost Matter devices as assets within smart city platforms. This work contributes to advancing IoT interoperability and paves the way for innovative applications in smart homes and smart cities.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"245 ","pages":"Article 108369"},"PeriodicalIF":4.3,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep-reinforcement-learning–guided resource allocation and task offloading for 6G edge intelligence
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108364
Jiangbo Tang
The proliferation of Internet of Everything (IoE) applications in 6G-enabled smart edge environments has intensified the demand for resource-efficient, latency-aware, and adaptive scheduling mechanisms. Conventional static or fairness-driven algorithms often fail to meet the scalability and responsiveness requirements of resource-constrained infrastructures. To address this challenge, we propose a novel hybrid scheduling framework, termed EOTSA (Energy-Optimized Task Scheduling and Allocation), which synergistically integrates Deep Q-Network (DQN)-based reinforcement learning with Particle Swarm Optimization (PSO). In this hybrid paradigm, the DQN serves as the primary decision-maker, learning long-term optimal policies for dynamic selection among three specialized scheduling strategies: Dynamic Priority Scheduling (DPS), Energy-Aware Fair Scheduling (EAFS), and Latency-Sensitive Adaptive Scheduling (LSAS). It does so by modeling the task offloading process as a Markov Decision Process and maximizing a multi-objective reward function encompassing latency, energy, and QoS. However, DQN suffers from slow convergence and susceptibility to local optima in high-dimensional, dynamic 6G environments. PSO complements DQN by enhancing exploration efficiency: it generates a population of candidate task-device assignments, refines them iteratively using velocity-position updates, and provides high-quality initial actions to the DQN agent, accelerating convergence and escaping local optima. Individually, DQN excels in sequential decision-making under uncertainty, while PSO is superior in parallel global search over continuous optimization spaces. Together, the hybrid framework overcomes the limitations of pure deep reinforcement learning and pure swarm intelligence, achieving robust, real-time adaptability under heterogeneous workloads. Simulation results benchmarked against RR, SC, MR, PF, PB, DRA, and ATO demonstrate that EOTSA achieves up to 30% energy savings, a 25% reduction in task completion time, and a 20% improvement in Quality of Service (QoS). These results highlight EOTSA's superior adaptability across heterogeneous execution scenarios, positioning it as a scalable and sustainable solution for intelligent edge computing in forthcoming 6G-based IoE networks.
{"title":"Deep-reinforcement-learning–guided resource allocation and task offloading for 6G edge intelligence","authors":"Jiangbo Tang","doi":"10.1016/j.comcom.2025.108364","DOIUrl":"10.1016/j.comcom.2025.108364","url":null,"abstract":"<div><div>The proliferation of Internet of Everything (IoE) applications in 6G-enabled smart edge environments has intensified the demand for resource-efficient, latency-aware, and adaptive scheduling mechanisms. Conventional static or fairness-driven algorithms often fail to meet the scalability and responsiveness requirements of resource-constrained infrastructures. To address this challenge, we propose a novel hybrid scheduling framework, termed EOTSA (Energy-Optimized Task Scheduling and Allocation), which synergistically integrates Deep Q-Network (DQN)-based reinforcement learning with Particle Swarm Optimization (PSO). In this hybrid paradigm, DQN serves as the primary decision-maker, learning long-term optimal policies for dynamic selection among three specialized scheduling strategies Dynamic Priority Scheduling (DPS), Energy-Aware Fair Scheduling (EAFS), and Latency-Sensitive Adaptive Scheduling (LSAS) by modeling the task offloading process as a Markov Decision Process and maximizing a multi-objective reward function encompassing latency, energy, and QoS. However, DQN suffers from slow convergence and susceptibility to local optima in high-dimensional, dynamic 6G environments. PSO complements DQN by enhancing exploration efficiency: it generates a population of candidate task-device assignments, refines them iteratively using velocity-position updates, and provides high-quality initial actions to the DQN agent, accelerating convergence and escaping local optima. Individually, DQN excels in sequential decision-making under uncertainty, while PSO is superior in parallel global search for continuous optimization spaces. Together, the hybrid framework resolves the limitations of pure DRL and pure swarm intelligence, achieving robust, real-time adaptability in heterogeneous workloads. Simulation results benchmarked against RR, SC, MR, PF, PB, DRA, and ATO demonstrate that EOTSA achieves up to 30 % energy savings, a 25 % reduction in task completion time, and a 20 % improvement in Quality of Service (QoS). These results highlight EOTSA's superior adaptability across heterogeneous execution scenarios, positioning it as a scalable and sustainable solution for intelligent edge computing in forthcoming 6G-based IoE networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"245 ","pages":"Article 108364"},"PeriodicalIF":4.3,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Service placement in the continuum: A systematic literature review
Pub Date: 2025-11-19 DOI: 10.1016/j.comcom.2025.108370
Waseem Sajjad, Montse Farreras, Jordi Garcia, Xavi Masip-Bruin
Cloud computing plays a crucial role in the Industry 4.0 era, particularly with the rise of Internet of Things (IoT) applications that support domains such as education, healthcare, business, and manufacturing. These applications consist of multiple services with diverse quality of service (QoS) requirements, making their development and deployment complex. While traditional cloud environments provide scalability, they often fail to support latency-sensitive and resource-intensive applications. To overcome these limitations, alternative paradigms such as Cloud–Fog–Edge (CFE), Cloud–Fog (CF), Cloud–Edge (CE), Fog–Edge (FE), and Mobile Edge Computing (MEC) have emerged. These models push computation, storage, and networking closer to end devices, reducing latency and bandwidth usage. However, the heterogeneity, mobility, and dynamic nature of these environments make service placement (a known NP-hard problem) a central challenge.
This article presents a systematic literature review of service placement approaches across the compute continuum. Following established SLR methodology, we identified and analyzed 124 peer-reviewed studies published between 2018 and 2024, classifying them by (i) deployment environment, (ii) service placement strategies and algorithms, (iii) adaptability of the solution, (iv) optimization objectives, (v) virtualization/orchestration technologies, (vi) evaluation methodologies, including workloads, testbeds, and simulation tools, and (vii) use cases or application types.
The novelty of this work lies in providing not only a detailed taxonomy of placement approaches but also, to the best of our knowledge, the first survey that takes all seven aspects into consideration and establishes correlations between them. Our findings reveal that most existing works target smart health applications and favor heuristic-based placement in complex CFE scenarios, while research on scientific and compute-intensive workloads remains limited. We also identify Kubernetes as the most widely used orchestration technology and latency as the dominant optimization metric. Despite significant progress, the field is still maturing, with gaps in real-world validation and in adaptive, ML-based placement strategies.
By consolidating technical approaches, evaluation practices, and open challenges, this survey offers both researchers and practitioners a structured overview of the state of the art and guidance for advancing service placement in the compute continuum.
{"title":"Service placement in the continuum: A systematic literature review","authors":"Waseem Sajjad, Montse Farreras, Jordi Garcia, Xavi Masip-Bruin","doi":"10.1016/j.comcom.2025.108370","DOIUrl":"10.1016/j.comcom.2025.108370","url":null,"abstract":"<div><div>Cloud computing plays a crucial role in the Industry 4.0 era, particularly with the rise of Internet of Things (IoT) applications that support domains such as education, healthcare, business, and manufacturing. These applications consist of multiple services with diverse quality of service (QoS) requirements, making their development and deployment complex. While traditional cloud environments provide scalability, they often fail to support latency-sensitive and resource-intensive applications. To overcome these limitations, alternative paradigms such as Cloud–Fog–Edge (CFE), Cloud–Fog (CF), Cloud–Edge (CE), Fog–Edge (FE), and Mobile Edge Computing (MEC) have emerged. These models push computation, storage, and networking closer to end devices, reducing latency and bandwidth usage. However, the heterogeneity, mobility, and dynamic nature of these environments make service placement (a known NP-hard problem) a central challenge.</div><div>This article presents a systematic literature review of service placement approaches across the compute continuum. Following established SLR methodology, we identified and analyzed 124 peer-reviewed studies published between 2018 and 2024, classifying them by (i) deployment environment, (ii) service placement strategies and algorithms, (iii) adaptability of the solution, (iv) optimization objectives, (v) virtualization/orchestration technologies, (vi) evaluation methodologies, including workloads, testbeds, and simulation tools and (vii) use cases or application types.</div><div>The novelty of this work lies in providing not only a detailed taxonomy of placement approaches but also this is the first survey that takes all seven aspects into consideration and establishes correlations between them. Our findings reveal that most existing works target smart health applications and favor heuristic-based placement in complex CFE scenarios, while research on scientific and compute-intensive workloads remains limited. We also identify Kubernetes as the most widely used orchestration technology and latency as the dominant optimization metric. Despite significant progress, the field is still maturing, with gaps in real-world validation and adaptive, ML-based placement strategies.</div><div>By consolidating technical approaches, evaluation practices, and open challenges, this survey offers both researchers and practitioners a structured overview of the state of the art and guidance for advancing service placement in the compute continuum.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"246 ","pages":"Article 108370"},"PeriodicalIF":4.3,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145555380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}