Pub Date: 2024-09-12 | DOI: 10.1016/j.comnet.2024.110791
With the wide adoption of 5G technology and the rapid development of 6G technology, a variety of new applications have emerged. Compute-intensive, time-sensitive applications deployed on terminal devices place stringent demands on network delay and bandwidth. Mobile Edge Computing (MEC) can effectively mitigate the issues of long transmission times, high energy consumption, and data insecurity. Task offloading, a key technology within MEC, has become a prominent research focus in this field. This paper presents a comprehensive review of the current research progress in MEC task offloading. Firstly, it introduces the fundamental concepts, application scenarios, and related technologies of MEC. Secondly, it categorizes offloading decisions into five aspects: reducing delay, minimizing energy consumption, balancing energy consumption and delay, enabling high-computing offloading, and addressing different application scenarios. It then critically analyzes and compares existing research efforts in these areas.
Title: "Task offloading strategies for mobile edge computing: A survey"
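The core offloading decision such surveys categorize (delay vs. energy) can be made concrete with a toy sketch. This uses a standard weighted delay/energy cost model common in the MEC literature, not any specific surveyed scheme; all parameter names and values are hypothetical.

```python
# Binary offloading decision under a weighted delay/energy cost model.
# Hypothetical illustration; parameters are not from the surveyed works.

def local_cost(cycles, f_local, kappa=1e-27, w_delay=0.5, w_energy=0.5):
    """Local execution: delay = C/f, dynamic energy = kappa * C * f^2."""
    delay = cycles / f_local
    energy = kappa * cycles * f_local ** 2
    return w_delay * delay + w_energy * energy

def offload_cost(bits, cycles, rate, p_tx, f_edge, w_delay=0.5, w_energy=0.5):
    """Offloading: uplink delay and energy plus edge compute delay."""
    t_up = bits / rate
    t_exec = cycles / f_edge
    energy = p_tx * t_up  # the device spends energy only while transmitting
    return w_delay * (t_up + t_exec) + w_energy * energy

def decide(bits, cycles, f_local, rate, p_tx, f_edge):
    """Return 'offload' if the edge cost is lower, else 'local'."""
    edge = offload_cost(bits, cycles, rate, p_tx, f_edge)
    local = local_cost(cycles, f_local)
    return "offload" if edge < local else "local"

# A heavy task over a fast link favors the edge; a tiny task over a
# slow link stays local.
heavy = decide(bits=1e6, cycles=1e9, f_local=1e9, rate=1e7, p_tx=0.5, f_edge=1e10)
light = decide(bits=1e6, cycles=1e6, f_local=1e9, rate=1e6, p_tx=0.5, f_edge=1e10)
```

Real offloading decisions additionally weigh queueing, channel variation, and server load, which is precisely where the surveyed strategies differ.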
Pub Date: 2024-09-12 | DOI: 10.1016/j.comnet.2024.110778
Recently, with the advancement of Internet of Things (IoT) technology, IoT-enabled Smart Grid (SG) applications have gained tremendous popularity. Ensuring reliable communication in IoT-based SG applications is challenging due to the harsh channel environment often encountered in the power grid. Error Control (EC) techniques have emerged as a promising solution to enhance reliability. Nevertheless, ensuring network reliability requires a substantial amount of energy. In this paper, we formulate a Mixed Integer Programming (MIP) model that accounts for the energy dissipation of EC techniques to maximize IoT network lifetime while ensuring the desired level of network reliability. We develop meta-heuristic approaches, namely Artificial Bee Colony (ABC) and Particle Swarm Optimization (PSO), to address the high computational complexity of large-scale IoT networks. Performance evaluations indicate that the EC-Node strategy, where each IoT node employs the most energy-efficient EC technique, extends network lifetime by at least 8.9% compared to the EC-Net strategies, in which all IoT nodes employ the same EC method for communication. Moreover, the PSO algorithm reduces the computational time by 77% while exhibiting a 2.69% decrease in network lifetime compared to the optimal solution.
Title: "Lifetime maximization of IoT-enabled smart grid applications using error control strategies"
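The trade-off the abstract reports (large runtime savings for a small optimality gap) is characteristic of PSO. A minimal PSO loop on a toy objective illustrates the mechanics; the sphere function here is a stand-in, not the paper's MIP lifetime model.

```python
import random

# Minimal particle swarm optimization (PSO) sketch. Inertia and
# acceleration coefficients are standard textbook defaults.

def pso(objective, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: minimize the sum of squares (optimum 0 at the origin).
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

For the actual lifetime-maximization MIP, the particle would encode each node's EC technique choice and the objective would evaluate the resulting network lifetime.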
Pub Date: 2024-09-12 | DOI: 10.1016/j.comnet.2024.110789
This paper investigates an interference-aware joint path planning and power allocation mechanism for a cellular-connected unmanned aerial vehicle (UAV) in a sparse suburban environment. The UAV’s goal is to fly from an initial point and reach a destination point by moving along the cells to guarantee the required quality of service (QoS). In particular, the UAV aims to maximize its uplink throughput and minimize interference to the ground user equipment (UEs) connected to neighboring cellular base stations (BSs), considering both the shortest path and limitations on flight resources. Expert knowledge of the scenario is used to define the desired behavior for training the agent (i.e., the UAV). To solve the problem, an apprenticeship learning method is applied via inverse reinforcement learning (IRL) based on both Q-learning and deep reinforcement learning (DRL). The performance of this method is compared to learning from demonstration via behavioral cloning (BC), a supervised learning approach. Simulation and numerical results show that the proposed approach can achieve expert-level performance. We also demonstrate that, unlike the BC technique, the performance of our proposed approach does not degrade in unseen situations.
Title: "Joint path planning and power allocation of a cellular-connected UAV using apprenticeship learning via deep inverse reinforcement learning"
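The RL component underlying the IRL pipeline can be shown with a toy tabular Q-learning loop. The 1-D corridor below (states 0..N, goal at N, step cost -1) is purely illustrative; the paper's state space of cells, throughput, and interference is abstracted away.

```python
import random

# Toy tabular Q-learning on a 1-D corridor. Actions: 0 = left, 1 = right.
# Hypothetical setting, standing in for the cell-grid the UAV traverses.

def train(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 0.0 if s2 == n_states - 1 else -1.0   # -1 per step until goal
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy: the learned action at each non-goal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

In IRL the reward itself is not given but inferred from expert trajectories; the Q-learning (or DRL) step above is then run against that inferred reward.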
Pub Date: 2024-09-12 | DOI: 10.1016/j.comnet.2024.110800
The Terahertz (THz) band (0.1–10 THz) is projected to enable broadband wireless communications of the future, and many envision deep learning as a solution to improve the performance of THz communication systems and networks. However, there are few available datasets of true THz signals that could enable testing and training of deep learning algorithms for the research community. In this paper, we provide an extensive dataset of 120,000 data frames for the research community. All signals were transmitted at 165 GHz but with varying bandwidths (5 GHz, 10 GHz, and 20 GHz), modulations (4PSK, 8PSK, 16QAM, and 64QAM), and transmit amplitudes (75 mV and 600 mV), resulting in twenty-four distinct bandwidth-modulation-power combinations each with 5,000 unique captures. The signals were captured after down conversion at an intermediate frequency of 10 GHz. This dataset enables the research community to experimentally explore solutions relating to ultrabroadband deep and machine learning applications.
Title: "Data signals for deep learning applications in Terahertz communications"
Pub Date: 2024-09-12 | DOI: 10.1016/j.comnet.2024.110799
The Android operating system has long been vulnerable to malicious software. Existing malware detection methods often fail to identify ever-evolving malware and are slow in detection. To address this, we propose a new model for rapid Android malware detection, which constructs various Android entities and relationships into a heterogeneous graph. Firstly, to address the semantic fusion problem in high-order heterogeneous graphs that arises with the increase in the depth of the heterogeneous graph model, we introduce adaptive weights during node aggregation to absorb the local semantics of nodes. This allows more attention to be paid to the feature information of the node itself during the semantic aggregation stage, thereby avoiding semantic confusion. Secondly, to mitigate the high time costs associated with detecting unknown applications, we employ an incremental similarity search model. This model quickly measures the similarity between unknown applications and those within the sample, aggregating the weights of nodes based on similarity scores and semantic attention coefficients, thereby enabling rapid detection. Lastly, considering the high time and space complexity of calculating node similarity scores on large graphs, we design a NeuSim model based on an encoder–decoder structure. The encoder module embeds each path instance as a vector, while the decoder converts the vector into a scalar similarity score, significantly reducing the complexity of the calculation. Experiments demonstrate that this model can not only rapidly detect malware but also capture high-level semantic relationships of application software in complex malware networks by hierarchically aggregating information from neighbors and meta-paths of different orders. Moreover, this model achieved an AUC of 0.9356 and an F1 score of 0.9355, surpassing existing malware detection algorithms. 
Particularly in the detection of unknown application software, the NeuSim model can double the detection speed, with an average detection time of 105 ms.
Title: "A fast malware detection model based on heterogeneous graph similarity search"
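The decoder step described (reducing a pair of path-instance embeddings to a scalar similarity score) can be sketched with plain cosine similarity standing in for the learned NeuSim decoder. The app embeddings below are made up for illustration.

```python
import math

# Cosine similarity as a hypothetical stand-in for NeuSim's learned
# encoder-decoder scoring; embeddings here are invented.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def most_similar(query, candidates):
    """Rank known apps by similarity to an unknown app's embedding."""
    return max(candidates, key=lambda name: cosine(query, candidates[name]))

known = {
    "benign_app": [0.90, 0.10, 0.00],
    "malware_app": [0.10, 0.95, 0.20],
}
unknown = [0.15, 0.90, 0.25]   # embedding of a newly seen application
label_source = most_similar(unknown, known)
```

The speedup reported in the abstract comes from replacing expensive graph-walk similarity computation with exactly this kind of cheap vector scoring once embeddings are learned.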
Pub Date: 2024-09-11 | DOI: 10.1016/j.comnet.2024.110796
In the context of mobile edge computing (MEC), delay-sensitive tasks can achieve real-time data processing and analysis by offloading to MEC servers. The objective is to maximize social welfare in an auction-based model. However, the distances between mobile devices and access points lead to differences in energy consumption. Unfortunately, existing works have not considered both maximizing social welfare and minimizing energy consumption. Motivated by this, we address the problem of joint resource allocation and task offloading in MEC, with heterogeneous MEC servers providing multiple types of resources for mobile devices (MDs) to perform tasks remotely. We split the problem into two sub-problems: winner determination and offloading decision. The first sub-problem determines the winners granted the ability to offload tasks so as to maximize social welfare. The second sub-problem determines how to offload tasks among the MEC servers so as to minimize energy consumption. For the winner determination problem, we propose a truthful algorithm that drives the system into equilibrium, and we derive its approximation ratios for single and multiple MEC servers. For the offloading decision problem, we propose an approximation algorithm and show that it is a polynomial-time approximation scheme for a single MEC server. Experiment results show that our proposed mechanism finds high-quality solutions in changing mobile environments.
Title: "Truthful mechanism for joint resource allocation and task offloading in mobile edge computing"
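A generic winner-determination step for such an auction can be sketched as a greedy selection by value density. This is a textbook approximation scheme under a single capacity constraint, not the paper's specific truthful mechanism; the bid figures are hypothetical.

```python
# Greedy winner determination: pick bids by value-per-unit-resource
# until server capacity is exhausted. Illustrative only.

def determine_winners(bids, capacity):
    """bids: dict name -> (value, resource demand).
    Returns (winners, social welfare)."""
    order = sorted(bids, key=lambda n: bids[n][0] / bids[n][1], reverse=True)
    winners, welfare, used = [], 0.0, 0.0
    for name in order:
        value, demand = bids[name]
        if used + demand <= capacity:   # admit only if the server can fit it
            winners.append(name)
            used += demand
            welfare += value
    return winners, welfare

bids = {"md_a": (10.0, 5.0), "md_b": (6.0, 2.0), "md_c": (4.0, 4.0)}
winners, welfare = determine_winners(bids, capacity=7.0)
```

Making such a rule truthful additionally requires a payment scheme (e.g., critical-value payments) so that no device gains by misreporting its bid, which is the harder part the paper addresses.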
Pub Date: 2024-09-11 | DOI: 10.1016/j.comnet.2024.110798
Smart cities rely heavily on surveillance cameras for urban management and security. However, the extensive use of these cameras also raises significant concerns regarding data privacy. Unauthorized access to facial data captured by these cameras and the potential for misuse of this data poses serious threats to individuals’ privacy. Current privacy preservation solutions often compromise data usability with noise application-based approaches and vulnerable centralized data handling settings. To address these privacy challenges, we propose a novel approach that combines Adversarial Machine Learning (AML) with Federated Learning (FL). Our approach involves the use of a noise generator that perturbs surveillance data right from the source before they leave the surveillance cameras. By exclusively training the Federated Learning model on these perturbed samples, we ensure that sensitive biometric features are not shared with centralized servers. Instead, such data remains on local devices (e.g., cameras), thereby ensuring that data privacy is maintained. We performed a thorough real-world evaluation of the proposed method and achieved an accuracy of around 99.95% in standard machine learning settings. In distributed settings, we achieved an accuracy of around 96.24% using federated learning, demonstrating the practicality and effectiveness of the proposed solution.
Title: "An Adversarial Machine Learning Based Approach for Privacy Preserving Face Recognition in Distributed Smart City Surveillance"
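The source-side perturbation idea can be sketched in a few lines: the raw feature vector never leaves the camera, only a noisy copy does. Plain Gaussian noise below is a hypothetical stand-in for the learned adversarial noise generator the abstract describes, and the feature values are invented.

```python
import random

# On-device perturbation sketch: only the perturbed vector is shared
# with the federated training process. Gaussian noise is a stand-in
# for the paper's learned adversarial generator.

def perturb(features, scale=0.1, seed=None):
    """Return a noisy copy of a feature vector; the raw vector stays local."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, scale) for x in features]

raw = [0.42, -1.30, 0.07]            # e.g. a facial embedding (hypothetical)
shared = perturb(raw, scale=0.1, seed=0)
```

The design tension is the one the abstract quantifies: larger noise means stronger privacy but lower downstream accuracy (99.95% centralized vs. 96.24% federated in their evaluation).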
Pub Date: 2024-09-10 | DOI: 10.1016/j.comnet.2024.110769
Limited energy and reliable data transmission are two key issues in Wireless body area networks (WBANs). The utilization of energy harvesting technology has alleviated the energy problem in WBANs, making continuous operation possible. However, Energy Harvesting WBANs (EH-WBANs) face new challenges. How to design efficient data transmission mechanisms taking into account the unstable energy harvesting conditions and dynamic network topology has become crucial. The efficiency of data transmission mainly depends on the network layer and media access control (MAC) layer. Therefore, this paper surveys the routing and MAC protocols proposed for EH-WBANs. There are some surveys on routing and MAC protocols for traditional battery-powered WBANs. However, these mechanisms cannot be directly applied to EH-WBANs due to the randomness and time-varying nature of the energy obtained by energy harvesting, which differs from the energy characteristics of nodes powered solely by batteries. In addition, due to the dynamic network topology and heterogeneous nodes in WBANs, the research results on routing and MAC protocols for Energy Harvesting Wireless Sensor Networks (EH-WSNs) cannot be directly applied to EH-WBANs. Thus, unlike previous surveys, this paper focuses on protocols specifically designed for EH-WBANs. It introduces and analyzes these protocols, summarizes the comprehensive performance metrics and efficient measures for data transmission mechanisms in EH-WBANs, and conducts a comprehensive performance analysis on the protocols proposed for EH-WBANs based on these metrics. This paper intends to provide assistance in addressing the energy and reliable data transmission issues in WBANs, thereby enhancing the applicability of EH-WBANs.
Title: "Efficient data transmission mechanisms in energy harvesting wireless body area networks: A survey"
Pub Date: 2024-09-07 | DOI: 10.1016/j.comnet.2024.110776
Large-scale penetration of renewable distributed energy sources (DERs) into smart grids (SGs) is an inevitable trend. Such high-DER-penetrated SGs entail heavy reliance on information and communication technologies and an increasing impact of social behaviors on system operation and management. In this sense, the SGs become cyber-physical-social systems. However, the deep coupling of cyber networks, physical grids, and societies makes SGs more complex and open, and therefore more likely to face various threats, especially advanced persistent threats (APTs) that disrupt system operations at a large scale. To better study the threats to the SGs, current APT detection work, and open challenges, we first analyze the key features of high-DER-penetrated SGs and the vulnerabilities of devices, networks, and applications in the SGs introduced by system design, limitations of deployed security measures, and social behaviors. On this basis, we analyze the APTs faced by the SGs and argue that they take the form of cyber-physical-social cooperated, multi-stage APTs. The possible attack methods for each stage of the APTs, typically stealthy attacks in the early stages and coordinated attacks in the action stage, are also summarized. Thereafter, a review of current work on security architectures for APT detection and intelligent intrusion detection methods is provided. Finally, we discuss the key challenges, research needs, and potential solutions of future work for securing the SGs against APTs, from the aspects of threat modeling, threat detection, threat hunting, and implementation technology.
{"title":"Detecting the cyber-physical-social cooperated APTs in high-DER-penetrated smart grids: Threats, current work and challenges","authors":"","doi":"10.1016/j.comnet.2024.110776","DOIUrl":"10.1016/j.comnet.2024.110776","url":null,"abstract":"<div><p>Large-scale penetration of renewable distributed energy sources (DERs) into smart grids (SGs) is an inevitable trend. Such high-DER-penetrated SGs entail heavy reliance on information and communication technologies and an increasing impact of social behaviors on system operation and management. In this sense, SGs become cyber-physical-social systems. However, the deep coupling of cyber networks, physical grids, and societies makes SGs more complex and open, and therefore more likely to face various threats, especially advanced persistent threats (APTs) that disrupt system operations at a large scale. To better study the threats, current APT detection work, and challenges of these SGs, we first analyze the key features of high-DER-penetrated SGs and the vulnerabilities of devices, networks, and applications in SGs introduced by system design, limitations of deployed security measures, and social behaviors. On this basis, we analyze the APTs faced by SGs and argue that they take the form of cyber-physical-social cooperated, multi-stage APTs. The possible attack methods for each stage of such APTs, typically stealthy attacks in the early stages and coordinated attacks at the action stage, are also summarized. Thereafter, a review of current work on security architectures for APT detection and intelligent intrusion detection methods is provided. Finally, we discuss the key challenges, research needs, and potential solutions of future work for securing SGs against APTs, from the aspects of threat modeling, threat detection, threat hunting, and implementation technology.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-06DOI: 10.1016/j.comnet.2024.110775
GPS is an integral part of billions of devices serving a wide range of applications. This reliance on GPS renders users vulnerable to GPS spoofing attacks, especially when precise or real-time location information is needed. To protect commodity devices, we first propose a crowdsourcing-based method for detecting GPS spoofing. In this method, called Method I, we leverage the orientation diversity of different users to expose spoofing attacks and, in many cases, the location of the attacker. In all scenarios, our method not only recovers the correct location but also significantly improves location accuracy. This is an important incentive that can drive the adoption of our approach, along with the use of privacy-preserving location sharing. Additionally, we leverage the inter-user distances produced by GPS and Bluetooth measurements to detect discrepancies and account for errors; we call this Method II. Method II is robust even in the presence of multiple coordinated adversaries. Experimental results based on our prototype implementation and large-scale simulations demonstrate a detection rate as high as 98.72%, a latency of 62 ms, and an average localization error of 2.43 m.
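The core idea behind the distance-discrepancy check (Method II) can be sketched as follows: if two nearby users compare the distance implied by their reported GPS fixes against an independent Bluetooth range measurement, a large disagreement suggests that at least one fix is spoofed. This is an illustrative sketch only, not the paper's implementation; the function names, the Haversine distance model, and the 10 m tolerance threshold are assumptions chosen for clarity.

```python
import math

def gps_distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spoofing_suspected(gps_a, gps_b, bt_range_m, tolerance_m=10.0):
    """Flag a possible spoofing event when the GPS-derived distance between
    two users disagrees with their Bluetooth-measured range by more than a
    tolerance (the threshold is illustrative; a real system would calibrate
    it against GPS and Bluetooth ranging error models)."""
    d_gps = gps_distance_m(gps_a[0], gps_a[1], gps_b[0], gps_b[1])
    return abs(d_gps - bt_range_m) > tolerance_m
```

For example, two fixes 0.001° of latitude apart lie roughly 111 m from each other, so a Bluetooth range of 500 m would trip the check while one of 111 m would not.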
{"title":"All in one: Improving GPS accuracy and security via crowdsourcing","authors":"","doi":"10.1016/j.comnet.2024.110775","DOIUrl":"10.1016/j.comnet.2024.110775","url":null,"abstract":"<div><p>GPS is an integral part of billions of devices that serve a wide range of applications. This reliance upon GPS renders the users vulnerable to GPS spoofing attacks, especially when in need of precise or real-time location information. To protect commodity devices, we first propose a crowdsourcing-based method for detecting GPS spoofing. In this method, called method I, we leverage the orientation diversity of different users to expose spoofing attacks and, in many cases, the location of the attacker. In all scenarios, our method not only recovers the correct location but also significantly improves the location accuracy. This is an important incentive that can drive the adoption of our approach along with the use of privacy-preserving location sharing. Additionally, we leverage the users’ distances produced by GPS and Bluetooth measurements to detect discrepancies and account for errors, called Method II. Method II is robust even in the presence of multiple coordinate adversaries. 
The experimental results based on our prototype implementation and large-scale simulations demonstrate a detection rate as high as 98.72% and latency of 62 ms with average localization error of 2.43 m.</p></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}