Efficiently linking LoRaWAN identifiers through multi-domain fingerprinting
Pub Date: 2025-08-01. Epub Date: 2025-06-16. DOI: 10.1016/j.pmcj.2025.102082
Samuel Pélissier, Abhishek Kumar Mishra, Mathieu Cunche, Vincent Roca, Didier Donsez
LoRaWAN is a leading IoT technology worldwide, increasingly integrated into pervasive computing environments through a growing number of sensors in various industrial and consumer applications. Although its security vulnerabilities have been extensively explored in the recent literature, its ties to human activities warrant further privacy research. Existing device identification and activity inference attacks are only effective with a stable identifier. We find that the identifiers in LoRaWAN exhibit high variability, and more than half of the devices use them for less than a week. For the first time in the literature, we explore the feasibility of device fingerprinting in LoRaWAN, allowing long-term device linkage, i.e. associating various identifiers of the same device. We introduce a novel holistic fingerprint representation utilizing multiple domains, namely content, timing, and radio information, and present a machine learning-based solution for linking identifiers. Through a large-scale experimental evaluation based on real-world datasets containing up to 41 million messages, we study multiple scenarios, including an attacker with limited resources. We reach 0.98 linkage accuracy, underscoring the need for privacy-preserving measures. We showcase countermeasures including payload padding, random delays, and radio signal modulation, and conclude by assessing their impact on our fingerprinting solution.
{"title":"Efficiently linking LoRaWAN identifiers through multi-domain fingerprinting","authors":"Samuel Pélissier , Abhishek Kumar Mishra , Mathieu Cunche , Vincent Roca , Didier Donsez","doi":"10.1016/j.pmcj.2025.102082","DOIUrl":"10.1016/j.pmcj.2025.102082","url":null,"abstract":"<div><div>LoRaWAN is a leading IoT technology worldwide, increasingly integrated into pervasive computing environments through a growing number of sensors in various industrial and consumer applications. Although its security vulnerabilities have been extensively explored in the recent literature, its ties to human activities warrant further privacy research. Existing device identification and activity inference attacks are only effective with a stable identifier. We find that the identifiers in LoRaWAN exhibit high variability, and more than half of the devices use them for less than a week. For the first time in the literature, we explore the feasibility of device fingerprinting in LoRaWAN, allowing long-term device linkage, i.e. associating various identifiers of the same device. We introduce a novel holistic fingerprint representation utilizing multiple domains, namely content, timing, and radio information, and present a machine learning-based solution for linking identifiers. Through a large-scale experimental evaluation based on real-world datasets containing up to 41 million messages, we study multiple scenarios, including an attacker with limited resources. We reach 0.98 linkage accuracy, underscoring the need for privacy-preserving measures. We showcase countermeasures including payload padding, random delays, and radio signal modulation, and conclude by assessing their impact on our fingerprinting solution.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102082"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight secure key establishment to create a secure channel between entities in a crowdsourcing environment
Pub Date: 2025-08-01. Epub Date: 2025-06-09. DOI: 10.1016/j.pmcj.2025.102078
Mahdi Nikooghadam, Hamid Reza Shahriari
The concept of crowdsourcing uses shared intelligence to solve complex tasks through group collaboration. Crowdsourcing involves gathering information and opinions from participants who submit their data or solutions over the Internet using a specific program. Given that the communication environment for crowdsourcing platforms is the Internet, there is a significant opportunity for attackers to compromise the confidentiality and integrity of information and violate participants’ privacy. Despite the great benefits of crowdsourcing, concerns about security and privacy are growing and require attention. Unfortunately, to the best of our knowledge, the schemes presented to preserve security and privacy in crowdsourcing are susceptible to security and privacy attacks and incur high computational and communication overhead. They are therefore not appropriate for crowdsourcing environments. This paper presents an ultra-lightweight authentication and key establishment protocol based on hash functions. This protocol meets all security requirements, is invulnerable to known attacks, and imposes a very low network overhead. The security of the proposed scheme has been formally proved, demonstrating its resistance to different types of possible attacks. In addition, the robustness of the proposed scheme against potential attacks has been verified with the Scyther automated validation tool. The performance evaluation ultimately demonstrated that the proposed protocol incurs significantly lower computational and communication costs than previous schemes and is well suited to the crowdsourcing environment.
{"title":"Lightweight secure key establishment to create a secure channel between entities in a crowdsourcing environment","authors":"Mahdi Nikooghadam, Hamid Reza Shahriari","doi":"10.1016/j.pmcj.2025.102078","DOIUrl":"10.1016/j.pmcj.2025.102078","url":null,"abstract":"<div><div>The concept of crowdsourcing uses shared intelligence to solve complex tasks through group collaboration. Crowdsourcing involves gathering information and opinions from participants who submit their data, or solutions, over the Internet using a specific program. Given that the communication environment for crowdsourcing platforms is the Internet, there is a significant opportunity for attackers to compromise the confidentiality and integrity of information and violate participants’ privacy. Despite the great benefits of crowdsourcing, concerns about security and privacy are growing and require attention. Unfortunately based on our knowledge, the schemes presented to preserve security and privacy in crowdsourcing are susceptible to security and privacy attack and have a high computational and communication overhead. Therefore, they are not appropriate for crowdsourcing environments. This paper presents an ultra-lightweight authentication and key establishment protocol based on hash functions. This protocol meets all security requirements, is invulnerable to known attacks, and imposes a very low network overhead. The security of the proposed scheme has been formally proved, depicting the resistance of the proposed scheme to different types of possible attacks. In addition, the robustness of the proposed scheme against potential attacks has been proven through Scyther’s automatic software validation tool. The performance evaluation ultimately demonstrated that the proposed protocol incurs significantly reduced computational and communication costs compared to previous schemes and is very suitable for the crowdsourcing environment.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102078"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144262812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An optimized Multi Agent Reinforcement Learning solution for edge caching in the Internet of Vehicles
Pub Date: 2025-08-01. DOI: 10.1016/j.pmcj.2025.102081
Mohamed Amine Ghamri, Badis Djamaa, Mohamed Akrem Benatia, Redouane Bellahmer
The Internet of Vehicles has evolved significantly with the integration of intelligent technologies, transforming vehicular networks by enhancing communication, resource management, and decision-making at the network’s edge. With the increasing complexity of vehicular environments and data demands, efficient caching mechanisms have become essential to ensure seamless service delivery and optimized resource usage. In this paper, we present LF-MARLEC, a Leader Follower Multi-Agent Reinforcement Learning solution for Edge Caching within the Internet of Vehicles. Our approach introduces a hierarchical distribution of action importance, enabling more effective decision-making at the network edge. Extensive experiments, conducted using widely adopted simulation tools such as SUMO and Veins, demonstrate that our approach substantially enhances caching performance and overall system efficiency. Specifically, our approach achieves a reduction of nearly 9% in content distribution delay and an improvement of over 11% in cache hit rate compared to state-of-the-art methods, thereby enhancing the effectiveness of intelligent edge caching in Internet of Vehicles environments. The source code is publicly available at: https://github.com/amine9008/RL-EDGE-CACHING.
Pervasive and Mobile Computing, Volume 112, Article 102081.
A-BEE-C: Autonomous Bandwidth-Efficient Edge Codecast
Pub Date: 2025-08-01. Epub Date: 2025-06-04. DOI: 10.1016/j.pmcj.2025.102075
Gyujeong Lim, Joon-Min Gil, Heonchang Yu
Edge computing is a new paradigm in cloud infrastructure that decentralizes computing and storage, bringing data and services closer to users. This proximity allows users to access high-quality or large-sized data with lower latency. However, edge servers typically have fewer resources than cloud servers, necessitating efficient resource management. Emerging research focuses on increasing the cache hit rate of user requests at edge servers, which reduces response latency and improves efficiency. Nonetheless, if available bandwidth is not considered, it becomes challenging to maintain both speed and quality in edge environments. This paper proposes an Autonomous Bandwidth-Efficient Edge Codecast (A-BEE-C) method to enhance the effective bandwidth per device within an edge service area. Codecast, introduced in this paper, is a transmission method that encodes multiple files into a single file before sending it to users. A-BEE-C introduces a dynamic mechanism that switches between unicast and codecast modes based on real-time bandwidth assessment. Our proposed method increases the effective bandwidth per device by encoding multiple user requests into a single coded transmission when the bandwidth of the edge server is limited. Experimental results demonstrate that A-BEE-C reduces the average latency per device by up to 9.89% (up to 18.45% with Zipf-pattern data) and increases the effective bandwidth per user by up to 10.15% (up to 18.11% with the Zipf pattern).
{"title":"A-BEE-C: Autonomous Bandwidth-Efficient Edge Codecast","authors":"Gyujeong Lim , Joon-Min Gil , Heonchang Yu","doi":"10.1016/j.pmcj.2025.102075","DOIUrl":"10.1016/j.pmcj.2025.102075","url":null,"abstract":"<div><div>Edge computing is a new paradigm in cloud infrastructure that decentralizes computing and storage, bringing data and services closer to the users. This proximity allows users to access high quality or large sized data with lower latency. However, edge servers typically have fewer resources than cloud servers, necessitating efficient resource management. Emerging research focuses on increasing the cache hit rate of user requests to edge servers, which reduces response latency and improves efficiency. Nonetheless, if available bandwidth is not considered, it becomes challenging to maintain both speed and quality in edge environments. This paper proposes an Autonomous Bandwidth-Efficient Edge Codecast (A-BEE-C) method to enhance the effective bandwidth per device within an edge service area. Codecast, introduced in this paper, is a transmission method that encodes multiple files into a single file before sending it to users. A-BEE-C introduces a dynamic mechanism that switches between unicast and codecast modes based on real-time bandwidth assessment. Our proposed method increases the effective bandwidth per device by encoding multiple user requests into a single coded transmission when the bandwidth of the edge server is limited. Experimental results demonstrate that A-BEE-C reduces average latency per device by up to 9.89% (and up to 18.45% with Zipf pattern data) and increases effective bandwidth per user by up to 10.15% (up to 18.11% with Zipf pattern).</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102075"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144221106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhanced hybrid prototype for few-shot class-incremental gait recognition in multi-activity scenarios using wearable sensors
Pub Date: 2025-08-01. Epub Date: 2025-07-16. DOI: 10.1016/j.pmcj.2025.102092
Chao Lin, Zhanyong Mei, Linlong Mao, Zijie Mei
Wearable devices for gait information sensing provide a reliable and robust solution for identity recognition. However, in real-world applications, gait recognition systems based on these sensing devices must adapt to diverse walking activities, tackle the challenge of limited individual data, and continuously update to recognize both old and new users. In this study, we propose a framework based on hybrid prototype enhancement to address the challenge of few-shot class-incremental gait recognition in multi-activity scenarios (FC-GRMA). First, hybrid prototypes are generated by introducing auxiliary activity labels, making them more generalizable than ordinary prototypes; second, the prototypes are adjusted by a selective prototype enhancement module, which improves their representativeness and discriminative power. Finally, validation on the public dataset USC-HAD and the self-built dataset CDUT-AG shows that our proposed framework performs best in solving the FC-GRMA problem. We also discuss the effect of different numbers of activities on model performance, and the results show that our framework effectively addresses the issue of catastrophic forgetting in multi-activity scenarios. The source code is available at https://github.com/lc321/fc-grma.git.
{"title":"Enhanced hybrid prototype for few-shot class-incremental gait recognition in multi-activity scenarios using wearable sensors","authors":"Chao Lin, Zhanyong Mei, Linlong Mao, Zijie Mei","doi":"10.1016/j.pmcj.2025.102092","DOIUrl":"10.1016/j.pmcj.2025.102092","url":null,"abstract":"<div><div>Wearable devices for gait information sensing provide a reliable and robust solution for identity recognition. However, in real-world applications, gait recognition systems based on these sensing devices should adapt to diverse walking activities, tackle the challenge of limited individual data, and continuously update to recognize both old and new users. In this study, we propose a framework based on hybrid prototype enhancement to address the challenge of few-shot class-incremental gait recognition in multi-activity scenarios (<em>FC-GRMA</em>). Firstly, hybrid prototypes are generated by introducing auxiliary activity labels, which are more generalizable than ordinary prototypes; secondly, the prototypes are adjusted by a selective prototype enhancement module, which improves the representative and discriminative abilities of the prototypes. Finally, validation on the public dataset USC-HAD and the self-built dataset CDUT-AG shows that our proposed framework performs best in solving the <em>FC-GRMA</em> problem. In particular, we also discuss the effect of different numbers of activities on the model performance, and the results show that our framework effectively addresses the issue of catastrophic forgetting in multi-activity scenarios. The source code is available at <span><span>https://github.com/lc321/fc-grma.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102092"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task offloading of IOT device in fog-enabled architecture using deep reinforcement learning approach
Pub Date: 2025-08-01. DOI: 10.1016/j.pmcj.2025.102067
Abhinav Tomar, Megha Sharma, Ashwarya Agarwal, Aditya Nath Jha, Jai Jaiswal
The rapid growth of IoT devices has strained traditional cloud-centric architectures, revealing limitations in latency, bandwidth, and reliability. Fog computing addresses these issues by decentralizing resources closer to data sources, but task offloading and resource allocation remain challenging due to dynamic workloads, heterogeneous resources, and strict QoS requirements. This study models task offloading as a multi-objective optimization problem, considering task priority, energy efficiency, latency, and deadlines. Using a Markov Decision Process (MDP) formulation, it applies three Deep Reinforcement Learning (DRL) algorithms (DQN, DDPG, and SAC) in a multi-agent fog computing setup. Unlike prior work focused on single-agent settings or isolated metrics, this approach captures inter-node dependencies to improve overall resource use. Simulations show that SAC achieves a 97.3% task deadline success rate and improves resource efficiency by 10.1%, highlighting its effectiveness in managing dynamic fog environments. These results advance scalable, adaptive offloading strategies for future IoT systems.
Pervasive and Mobile Computing, Volume 112, Article 102067.
A customizable benchmarking tool for evaluating personalized thermal comfort provisioning in smart spaces using Digital Twins
Pub Date: 2025-08-01. Epub Date: 2025-06-04. DOI: 10.1016/j.pmcj.2025.102076
Jun Ma, Dimitrije Panic, Roberto Yus, Georgios Bouloukakis
Providing proper thermal comfort to individual occupants is crucial to improve well-being and work efficiency. However, Heating, Ventilation, and Air Conditioning (HVAC) systems are responsible for a large portion of energy consumption and CO2 emissions in buildings. To combat the current energy crisis and climate change, innovative ways have been proposed to leverage pervasive and mobile computing systems equipped with sensors and smart devices for occupant thermal comfort satisfaction and efficient HVAC management. However, evaluating these thermal comfort provision solutions presents considerable difficulties. Conducting experiments in the real world poses challenges such as privacy concerns and the high costs of installing and maintaining sensor infrastructure. On the other hand, experiments with simulations need to accurately model real-world conditions and ensure the reliability of the simulated data.
To address these challenges, we present Co-zyBench, an innovative benchmarking tool that leverages Digital Twin (DT) technology to assess personalized thermal comfort provision systems. Our benchmark employs a simulation-based DT for the building and its HVAC system, another DT for simulating the dynamic behavior of its occupants, and a co-simulation middleware to achieve a seamless connection of the DTs. Our benchmark includes mechanisms to generate DTs based on data such as architectural models of buildings, sensor readings, and occupant thermal sensation data. It also includes reference DTs based on standard buildings, HVAC configurations, and various occupant thermal profiles. As a result of the evaluation, the benchmark generates a report based on expected energy consumption, carbon emission, thermal comfort, and occupant equity metrics. We present the evaluation results of state-of-the-art thermal comfort provisioning systems within a DT based on a real building and several reference DTs.
{"title":"A customizable benchmarking tool for evaluating personalized thermal comfort provisioning in smart spaces using Digital Twins","authors":"Jun Ma , Dimitrije Panic , Roberto Yus , Georgios Bouloukakis","doi":"10.1016/j.pmcj.2025.102076","DOIUrl":"10.1016/j.pmcj.2025.102076","url":null,"abstract":"<div><div>Providing proper thermal comfort to individual occupants is crucial to improve well-being and work efficiency. However, Heating, Ventilation, and Air Conditioning (HVAC) systems are responsible for a large portion of energy consumption and CO2 emissions in buildings. To combat the current energy crisis and climate change, innovative ways have been proposed to leverage pervasive and mobile computing systems equipped with sensors and smart devices for occupant thermal comfort satisfaction and efficient HVAC management. However, evaluating these thermal comfort provision solutions presents considerable difficulties. Conducting experiments in the real world poses challenges such as privacy concerns and the high costs of installing and maintaining sensor infrastructure. On the other hand, experiments with simulations need to accurately model real-world conditions and ensure the reliability of the simulated data.</div><div>To address these challenges, we present Co-zyBench, an innovative benchmarking tool that leverages Digital Twin (DT) technology to assess personalized thermal comfort provision systems. Our benchmark employs a simulation-based DT for the building and its HVAC system, another DT for simulating the dynamic behavior of its occupants, and a co-simulation middleware to achieve a seamless connection of the DTs. Our benchmark includes mechanisms to generate DTs based on data such as architectural models of buildings, sensor readings, and occupant thermal sensation data. It also includes reference DTs based on standard buildings, HVAC configurations, and various occupant thermal profiles. As a result of the evaluation, the benchmark generates a report based on expected energy consumption, carbon emission, thermal comfort, and occupant equity metrics. We present the evaluation results of state-of-the-art thermal comfort provisioning systems within a DT based on a real building and several reference DTs.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102076"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144241928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An energy-efficient IoMT three-tier architecture for continuous monitoring of endangered bird species
Pub Date: 2025-08-01. Epub Date: 2025-07-24. DOI: 10.1016/j.pmcj.2025.102093
Aya Sakhri, Moufida Maimour, Noureddine Doghmane, Eric Rondeau, Saliha Harize
The alarming decline in animal populations, particularly birds, due to environmental degradation necessitates close monitoring of endangered migratory waterbirds in their natural habitats. This can be accomplished through the continuous capture and transmission of images for population estimation, habitat analysis, and various related studies. This paper introduces a three-tier IoMT (Internet of Multimedia Things) architecture deployed along the Edge-Cloud continuum for automated bird monitoring, aimed at safeguarding endangered waterbird populations. At the edge level, Wireless Multimedia Sensor Networks (WMSN) periodically capture and transmit images to a central collection station (fog level). Challenges such as limited bandwidth and power in Low-Power and Lossy Networks (LLNs) are addressed through local audio identification of endangered bird calls, which activates cameras only for target birds. This significantly reduces data transmission and conserves energy. To tackle ambient noise in audio recognition, especially in complex environments such as wetlands, an appropriate noise reduction technique is employed to augment the automatic bird call recognition system. This paper details an energy-efficient approach addressing the challenges of LLNs and incorporates robust noise reduction techniques to improve local audio recognition. The research includes a thorough analysis of potential technical solutions prior to implementation, a critical phase in system development.
{"title":"An energy-efficient IoMT three-tier architecture for continuous monitoring of endangered bird species","authors":"Aya Sakhri , Moufida Maimour , Noureddine Doghmane , Eric Rondeau , Saliha Harize","doi":"10.1016/j.pmcj.2025.102093","DOIUrl":"10.1016/j.pmcj.2025.102093","url":null,"abstract":"<div><div>The alarming decline in animal populations, particularly birds, due to environmental degradation necessitates close monitoring of endangered migratory waterbirds in their natural habitats. This can be accomplished through the continuous capture and transmission for population estimation, habitat analysis, and various relevant studies. This paper introduces a three-tier IoMT (Internet of Multimedia Things) deployed along the Edge-Cloud continuum for automated bird monitoring systems aimed at safeguarding endangered waterbird populations. At the edge level, Wireless Multimedia Sensor Networks (WMSN) are used to periodically capture and transmit images to a central collection station (fog level). Challenges such as limited bandwidth and power in Low-Power and Lossy Networks (LLNs) are addressed through local audio identification of endangered bird calls, which activates cameras only for target birds. This significantly reduces data transmission and conserves energy. To tackle ambient noise issues in audio recognition, especially in complex environments such as wetlands, an appropriate noise reduction technique is employed to augment our automatic bird call recognition system. This paper details an energy-efficient approach addressing LLNs’ challenges and incorporates robust noise reduction techniques to improve local audio recognition. The research includes a thorough analysis of potential technical solutions prior to implementation, establishing a critical phase in the system development.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102093"},"PeriodicalIF":3.5,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144724153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital twin-enabled age of information-aware scheduling for Industrial IoT edge networks
Pub Date: 2025-08-01. Epub Date: 2025-06-16. DOI: 10.1016/j.pmcj.2025.102083
Elif Bozkaya-Aras
Mobile Edge Computing (MEC) is a significant technology in the development of the Industrial Internet of Things (IIoT), as it allows the collection and processing of high volumes of data at the network edge to support industrial processes and improve operational efficiency and productivity. However, despite significant advances in MEC capabilities, the stringent latency requirements of computation-intensive tasks can affect the freshness of status information. Scheduling tasks efficiently between local and remote computation therefore poses practical challenges. In this context, we propose an Age of Information (AoI)-based scheduler to determine where to execute computational tasks in order to continuously track state data updates, where the AoI metric measures the time elapsed from the generation of a computation task at the source to the latest received update at the destination. The contributions of this paper are threefold. First, we propose a digital twin-enabled AoI-based scheduler model that collects real-time data from IIoT nodes and predicts the best task assignment between local and remote computation. The digital twin environment allows monitoring of the state changes of the real physical assets over time and optimizes the scheduling strategy. Second, we formulate the average AoI problem with an M/M/1 queueing model and propose a genetic algorithm-based scheduler that minimizes AoI and task completion time to efficiently schedule computation tasks between IIoT devices and MEC servers. Third, we compare the performance of our digital twin-enabled model with traditional strategies and make a significant contribution to IIoT edge network management by analyzing AoI, task completion time, and MEC server utilization.
{"title":"Digital twin-enabled age of information-aware scheduling for Industrial IoT edge networks","authors":"Elif Bozkaya-Aras","doi":"10.1016/j.pmcj.2025.102083","DOIUrl":"10.1016/j.pmcj.2025.102083","url":null,"abstract":"<div><div>Mobile Edge Computing (MEC) is a significant technology employed in the development of the Industrial Internet of Things (IIoT) as it allows the collection and processing of high volumes of data at the network edge to support industrial processes and improve operational efficiency and productivity. However, despite significant advances in MEC capabilities, the stringent latency requirement that may occur in computation-intensive tasks may affect the freshness of status information. Therefore, there are practical challenges in scheduling the tasks associated with computational efficiency in local computation and remote computation. In this context, we propose an Age of Information (AoI)-based scheduler to determine where to execute computational tasks in order to continuously track state data updates, where the AoI metric measures the time elapsed from the generation of the computation task at the source to the latest received update at the destination. The contributions of this paper are threefold: First, we propose a digital twin-enabled AoI-based scheduler model that collects real-time data from IIoT nodes and predicts the best task assignment in terms of local computation and remote computation. The digital twin environment allows monitoring of the state changes of the real physical assets over time and optimizes the scheduling strategy. Second, we formulate the average AoI problem with the M/M/1 queueing model and propose a genetic algorithm-based scheduler to minimize AoI and task completion time to efficiently schedule the computation tasks between IIoT devices and MEC servers. Third, we compare the performance of our digital twin-enabled model with the traditional strategies and make a significant contribution to IIoT edge network management by analyzing AoI, task completion time and MEC server utilization.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"112 ","pages":"Article 102083"},"PeriodicalIF":3.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144313757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Octopus: Knapsack model-driven federated learning client selection in internet of vehicles
Pub Date: 2025-06-01. Epub Date: 2025-05-16. DOI: 10.1016/j.pmcj.2025.102063
Ling Xing, Jingjing Cui, Jianping Gao, Kaikai Deng, Honghai Wu, Huahong Ma
Federated learning (FL), as a distributed way of processing real-time vehicle data, is widely used to improve driving experience and enhance service quality in the Internet of Vehicles (IoV). However, considering the data and device heterogeneity of vehicle nodes, randomly selecting vehicles to participate in model training suffers from data skewness, high resource consumption, and low convergence speed. To this end, we propose Octopus, which consists of two components: i) an importance sampling-based local loss computation method is designed to request resource information from each client and apply the importance sampling technique to assess each client’s contribution to the global model’s convergence, followed by utilizing a knapsack model that treats the local loss of each client as the item value and the total system training time as the knapsack capacity to accelerate client convergence; ii) a knapsack model-based federated learning client selection method is designed to select the clients with optimal local loss and maximum model uploading speed to participate in training. In each training round, these clients download and update the model within a predefined time, and then upload the updated model parameters to help the server efficiently complete the model aggregation. Experimental results show that Octopus improved model accuracy by 2.64%–32.61% with heterogeneous data, and by 1.97%–11.74% with device heterogeneity, compared to eight state-of-the-art baselines.
{"title":"Octopus: Knapsack model-driven federated learning client selection in internet of vehicles","authors":"Ling Xing , Jingjing Cui , Jianping Gao , Kaikai Deng , Honghai Wu , Huahong Ma","doi":"10.1016/j.pmcj.2025.102063","DOIUrl":"10.1016/j.pmcj.2025.102063","url":null,"abstract":"<div><div>Federated learning (FL), as a distributed way for processing real-time vehicle data, is widely used to improve driving experience and enhance service quality in Internet of Vehicles (IoV). However, considering the data and devices heterogeneity of vehicle nodes, randomly selecting vehicles that are involved in model training would suffer from data skewness, high resource consumption, and low convergence speed. To this end, we propose <span>Octopus</span>, which consists of two components: i) an <em>importance sampling-based local loss computation</em> method is designed to request resource information for each client and apply the importance sampling technique to assess each client’s contribution to the global model’s convergence, followed by utilizing a knapsack model that treats the local loss of each client as the item value, while treating the total system training time as the knapsack capacity to accelerate the client convergence; ii) a <em>knapsack model-based federated learning client selection</em> method is designed to select the client with optimal local loss and maximum model uploading speed to participate in training. In each training round, these clients download and update the model within a predefined time, followed by enabling the selected clients to continue uploading the updated model parameters for assisting the server to efficiently complete the model aggregation. Experimental results show that <span>Octopus</span> improved the model accuracy by 2.64% <span><math><mo>∼</mo></math></span>32.61% with heterogeneous data, and by 1.97% <span><math><mo>∼</mo></math></span>11.74% with device heterogeneity, compared to eight state-of-the-art baselines.</div></div>","PeriodicalId":49005,"journal":{"name":"Pervasive and Mobile Computing","volume":"111 ","pages":"Article 102063"},"PeriodicalIF":3.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144071054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}