Du Bowen, Wang Haiquan, Li Yuxuan, Jiejie Zhao, Yanbo Ma, Huang Runhe
As an emerging learning paradigm, Federated Learning (FL) enables data owners to collaboratively train a model while keeping their data local. However, classic FL methods are susceptible to model poisoning attacks and Byzantine failures. Although several defense methods have been proposed to mitigate such concerns, it remains challenging to suppress adverse effects while still allowing every credible node to contribute to the learning process. To this end, we propose a Fair and Robust FL method, named FRFL, to defend against model poisoning attacks from malicious nodes. FRFL can learn a high-quality model even if some nodes are malicious. In particular, we first classify each participant into one of three categories: training node, validation node, and blockchain node. Among these, blockchain nodes replace the central server of classic FL methods while enabling secure aggregation. Then, a fairness-aware role rotation method is proposed to periodically alter the sets of training and validation nodes, so as to utilize the valuable information contained in the local datasets of credible nodes. Finally, a decentralized and adaptive aggregation mechanism, cooperating with the blockchain nodes, is designed to detect and discard malicious nodes and produce a high-quality model. The results show the effectiveness of FRFL in enhancing model performance while defending against malicious nodes.
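The abstract names two mechanisms: periodic rotation of training and validation roles, and validation-driven filtering before aggregation. A minimal sketch of how they could compose is below; the rotation rule, the loss-based scoring, and the threshold are all hypothetical, since the abstract does not give the concrete procedures.

```python
import random

def rotate_roles(nodes, n_train, round_idx, seed=0):
    # Hypothetical fairness-aware rotation: reshuffle deterministically
    # each round so every credible node's local data periodically
    # contributes to training instead of only to validation.
    rng = random.Random(seed + round_idx)
    shuffled = rng.sample(nodes, len(nodes))
    return shuffled[:n_train], shuffled[n_train:]  # (training, validation)

def aggregate(updates, validation_scores, threshold):
    # Blockchain nodes would run this step: drop updates whose
    # validation score (assumed here: loss on the validators' data)
    # exceeds a threshold, then average the survivors.
    kept = [u for u, s in zip(updates, validation_scores) if s <= threshold]
    if not kept:
        return None
    dim = len(kept[0])
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)]

# Toy round: the third node's poisoned update scores badly and is discarded.
train, val = rotate_roles(["a", "b", "c", "d", "e"], n_train=3, round_idx=7)
print(train, val)
print(aggregate([[0.1, 0.2], [0.2, 0.1], [9.0, 9.0]], [0.4, 0.5, 3.2], 1.0))
```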
{"title":"Fair and Robust Federated Learning via Decentralized and Adaptive Aggregation based on Blockchain","authors":"Du Bowen, Wang Haiquan, Li Yuxuan, Jiejie Zhao, Yanbo Ma, Huang Runhe","doi":"10.1145/3673656","DOIUrl":"https://doi.org/10.1145/3673656","url":null,"abstract":"<p>As an emerging learning paradigm, Federated Learning (FL) enables data owners to collaborate training a model while keeps data locally. However, classic FL methods are susceptible to model poisoning attacks and Byzantine failures. Despite several defense methods proposed to mitigate such concerns, it is challenging to balance adverse effects while allowing that each credible node contributes to the learning process. To this end, a Fair and Robust FL method is proposed for defense against model poisoning attack from malicious nodes, namely FRFL. FRFL can learn a high-quality model even if some nodes are malicious. In particular, we first classify each participant into three categories: training node, validation node, and blockchain node. Among these, blockchain nodes replace the central server in classic FL methods while enabling secure aggregation. Then, a fairness-aware role rotation method is proposed to periodically alter the sets of training and validation nodes in order to utilize the valuable information included in local datasets of credible nodes. Finally, a decentralized and adaptive aggregation mechanism cooperating with blockchain nodes is designed to detect and discard malicious nodes and produce a high-quality model. The results show the effectiveness of FRFL in enhancing model performance while defending against malicious nodes.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"77 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141549747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jingkai Liu, Xiaoting Lyu, Li Duan, Yongzhong He, Jiqiang Liu, Hongliang Ma, Bin Wang, Chunhua Su, Wei Wang
Federated learning (FL), which holds promise for edge intelligence applications in smart cities, enables smart devices to collaborate in training a global model by exchanging local model updates instead of sharing local training data. However, the global model can be corrupted by malicious clients conducting poisoning attacks, resulting in the global model failing to converge, producing incorrect predictions on the test set, or carrying an embedded backdoor. Although some aggregation algorithms can enhance the robustness of FL against malicious clients, our work demonstrates that existing stealthy poisoning attacks can still bypass these defense methods. In this work, we propose a robust aggregation mechanism, called Parts and All (PnA), that protects the global model of FL by filtering out malicious local model updates through layer-wise detection of poisoning attacks in local model updates. We conduct comprehensive experiments on three representative datasets. The experimental results demonstrate that our proposed PnA is more effective than existing robust aggregation algorithms against state-of-the-art poisoning attacks. Moreover, PnA performs stably against poisoning attacks under different poisoning settings.
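PnA detects poisoning at the granularity of individual layers of local model updates. The sketch below shows a generic layer-wise filter in that spirit: it flags a client whose update for any layer strays far from the per-layer median, then averages the rest. The median-distance rule and the z-score cutoff are assumptions, not the paper's actual detector.

```python
import numpy as np

def layerwise_filter(client_updates, z_thresh=2.5):
    """client_updates: list of dicts mapping layer name -> np.ndarray.
    A client is flagged when any of its layers lies far from the
    per-layer coordinate-wise median (assumed rule); surviving clients'
    updates are then averaged layer by layer."""
    layers = list(client_updates[0].keys())
    flagged = set()
    for name in layers:
        stack = np.stack([u[name] for u in client_updates])  # (n_clients, ...)
        median = np.median(stack, axis=0)
        dists = np.linalg.norm((stack - median).reshape(len(stack), -1), axis=1)
        cutoff = dists.mean() + z_thresh * dists.std()
        flagged |= {i for i, d in enumerate(dists) if d > cutoff}
    kept = [u for i, u in enumerate(client_updates) if i not in flagged]
    if not kept:  # degenerate case: keep everyone rather than nothing
        kept = client_updates
    return {name: np.mean([u[name] for u in kept], axis=0) for name in layers}
```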
{"title":"PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence","authors":"Jingkai Liu, Xiaoting Lyu, Li Duan, Yongzhong He, Jiqiang Liu, Hongliang Ma, Bin Wang, Chunhua Su, Wei Wang","doi":"10.1145/3669902","DOIUrl":"https://doi.org/10.1145/3669902","url":null,"abstract":"<p>Federated learning (FL), which holds promise for use in edge intelligence applications for smart cities, enables smart devices collaborate in training a global model by exchanging local model updates instead of sharing local training data. However, the global model can be corrupted by malicious clients conducting poisoning attacks, resulting in the failure of converging the global model, incorrect predictions on the test set, or the backdoor embedded. Although some aggregation algorithms can enhance the robustness of FL against malicious clients, our work demonstrates that existing stealthy poisoning attacks can still bypass these defense methods. In this work, we propose a robust aggregation mechanism, called <i>Parts and All</i> (<i>PnA</i>), to protect the global model of FL by filtering out malicious local model updates throughout the detection of poisoning attacks at layers of local model updates. We conduct comprehensive experiments on three representative datasets. The experimental results demonstrate that our proposed <i>PnA</i> is more effective than existing robust aggregation algorithms against state-of-the-art poisoning attacks. Besides, <i>PnA</i> has a stable performance against poisoning attacks with different poisoning settings.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"138 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141197958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate localization of unmanned aerial vehicles (UAVs) is critical for navigation in GPS-denied regions and remains a highly challenging research topic. This paper describes a novel multi-sensor hybrid coupled cooperative localization network (HCCNet) system that combines multiple types of sensors, including a camera, ultra-wideband (UWB), and an inertial measurement unit (IMU), to address this challenge. The camera and IMU automatically determine the position of the UAV based on perception of the surrounding environment and their own measurement data. The UWB node and the UWB wireless sensor network (WSN) in indoor environments jointly determine the global position of the UAV, and the proposed dynamic random sample consensus (D-RANSAC) algorithm improves UWB localization accuracy. To fully exploit the UWB localization results, the HCCNet system combines the local pose estimates of a visual-inertial odometry (VIO) system with global constraints from the UWB localization results. Experimental results show that the proposed D-RANSAC algorithm achieves better accuracy than other UWB-based algorithms. The effectiveness of the proposed HCCNet method is verified with a mobile robot in the real world and through simulation experiments in indoor environments.
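The abstract credits a dynamic RANSAC (D-RANSAC) variant for the UWB accuracy gains but does not describe how its threshold adapts. For context, here is a plain RANSAC sketch over UWB range measurements using standard linearized multilateration; the solver, inlier threshold, and iteration count are generic choices rather than the paper's.

```python
import numpy as np

def solve_position(anchors, ranges):
    # Linearized least-squares multilateration (standard technique):
    # subtracting the first anchor's range equation from the others
    # yields a linear system A x = b in the unknown position x.
    p0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = r0**2 - ranges[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def ransac_locate(anchors, ranges, n_iters=100, inlier_thresh=0.3, seed=0):
    # Generic RANSAC over ranges; D-RANSAC would adapt inlier_thresh
    # dynamically (the adaptation rule is not given in the abstract).
    rng = np.random.default_rng(seed)
    best_pos, best_inliers = None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(anchors), size=4, replace=False)  # minimal 3-D set
        pos = solve_position(anchors[idx], ranges[idx])
        residuals = np.abs(np.linalg.norm(anchors - pos, axis=1) - ranges)
        inliers = int((residuals < inlier_thresh).sum())
        if inliers > best_inliers:
            best_pos, best_inliers = pos, inliers
    return best_pos, best_inliers
```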
{"title":"HCCNet: Hybrid Coupled Cooperative Network for Robust Indoor Localization","authors":"Li Zhang, Xu Zhou, Danyang Li, Zheng Yang","doi":"10.1145/3665645","DOIUrl":"https://doi.org/10.1145/3665645","url":null,"abstract":"<p>Accurate localization of unmanned aerial vehicle (UAV) is critical for navigation in GPS-denied regions, which remains a highly challenging topic in recent research. This paper describes a novel approach to multi-sensor hybrid coupled cooperative localization network (HCCNet) system that combines multiple types of sensors including camera, ultra-wideband (UWB), and inertial measurement unit (IMU) to address this challenge. The camera and IMU can automatically determine the position of UAV based on the perception of surrounding environments and their own measurement data. The UWB node and the UWB wireless sensor network (WSN) in indoor environments jointly determine the global position of UAV, and the proposed dynamic random sample consensus (D-RANSAC) algorithm can optimize UWB localization accuracy. To fully exploit UWB localization results, we provide a HCCNet system which combines the local pose estimator of visual inertial odometry (VIO) system with global constraints from UWB localization results. Experimental results show that the proposed D-RANSAC algorithm can achieve better accuracy than other UWB-based algorithms. The effectiveness of the proposed HCCNet method is verified by a mobile robot in real world and some simulation experiments in indoor environments.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"12 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141167566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smart cities have drawn a lot of interest in recent years; they employ Internet of Things (IoT)-enabled sensors to gather data from various sources and help enhance the quality of residents' life in multiple areas, e.g., public safety. Accurate crime prediction is significant for promoting public safety. However, complicated spatial-temporal dependencies make the task challenging, for two reasons: 1) the spatial dependency of crime includes correlations with spatially adjacent regions as well as underlying correlations with distant regions, e.g., mobility connectivity and functional similarity; 2) there are near-repeat and long-range temporal correlations between crime occurrences across time. Most existing studies fall short in tackling multi-view correlations, since they usually treat the correlations equally without assigning them different weights. In this paper, we propose a novel model for region-level crime prediction, named Heterogeneous Dynamic Multi-view Graph Neural Network (HDM-GNN). The model represents the dynamic spatial-temporal dependencies of crime with heterogeneous urban data and fuses various types of region-wise correlations from multiple views. Global spatial dependencies and long-range temporal dependencies are derived by integrating multiple GAT modules and Gated CNN modules. Extensive experiments on several real-world datasets evaluate the effectiveness of our method. Results demonstrate that our method outperforms state-of-the-art baselines. All code is available at https://github.com/ZJUDataIntelligence/HDM-GNN.
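The model's distinguishing claim is that correlations from different views (spatial adjacency, mobility connectivity, functional similarity) should be fused with learned, unequal weights. A toy attention-style fusion over per-view region embeddings is sketched below in numpy; in the actual model these weights are trained end-to-end inside the GAT and Gated CNN modules, which the sketch omits.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_views(view_embeddings, view_logits):
    """view_embeddings: list of (n_regions, d) arrays, one per view
    (e.g., adjacency, mobility, functional similarity).
    view_logits: stand-ins for parameters the full model would learn
    end-to-end; here they just set unequal view weights."""
    weights = softmax(np.asarray(view_logits, dtype=float))
    return sum(w * e for w, e in zip(weights, view_embeddings))

# Toy usage: three views over 5 regions, 8-dim embeddings; the second
# view (logit 1.5) dominates the fused representation.
views = [np.random.randn(5, 8) for _ in range(3)]
print(fuse_views(views, view_logits=[0.2, 1.5, -0.3]).shape)  # (5, 8)
```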
{"title":"HDM-GNN: A Heterogeneous Dynamic Multi-view Graph Neural Network for Crime Prediction","authors":"Binbin Zhou, Hang Zhou, Weikun Wang, Liming Chen, Jianhua Ma, Zengwei Zheng","doi":"10.1145/3665141","DOIUrl":"https://doi.org/10.1145/3665141","url":null,"abstract":"<p>Smart cities have drawn a lot of interest in recent years, which employ Internet of Things (IoT)-enabled sensors to gather data from various sources and help enhance the quality of residents’ life in multiple areas, e.g. public safety. Accurate crime prediction is significant for public safety promotion. However, the complicated spatial-temporal dependencies make the task challenging, due to two aspects: 1) spatial dependency of crime includes correlations with spatially adjacent regions and underlying correlations with distant regions, e.g. mobility connectivity and functional similarity; 2) there are near-repeat and long-range temporal correlations between crime occurrences across time. Most existing studies fall short in tackling with multi-view correlations, since they usually treat them equally without consideration of different weights for these correlations. In this paper, we propose a novel model for region-level crime prediction named as Heterogeneous Dynamic Multi-view Graph Neural Network (HDM-GNN). The model can represent the dynamic spatial-temporal dependencies of crime with heterogeneous urban data, and fuse various types of region-wise correlations from multiple views. Global spatial dependencies and long-range temporal dependencies can be derived by integrating the multiple GAT modules and Gated CNN modules. Extensive experiments are conducted to evaluate the effectiveness of our method using several real-world datasets. Results demonstrate that our method outperforms state-of-the-art baselines. All the code are available at https://github.com/ZJUDataIntelligence/HDM-GNN.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"44 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140940618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive human tracking using Wi-Fi has been researched broadly in the past decade. Besides straightforward anchor point localization, velocity is another vital sign adopted by existing approaches to infer user trajectory. However, state-of-the-art Wi-Fi velocity estimation relies on the Doppler frequency shift (DFS), which suffers from inevitable signal noise that incurs unbounded velocity errors and further degrades tracking accuracy. In this paper, we present WiVelo, which explores new spatial-temporal signal correlation features observed from different antennas to achieve accurate velocity estimation. First, we use the subcarrier shift distribution (SSD) extracted from channel state information (CSI) to define two correlation features for direction and speed estimation, respectively. Then, we design a mesh model calculated from the antennas' locations to enable fine-grained velocity estimation with bounded direction error. Finally, with the continuously estimated velocity, we develop an end-to-end trajectory recovery algorithm that exploits the continuity of walking velocity to mitigate velocity outliers. We implement WiVelo on commodity Wi-Fi hardware and extensively evaluate its tracking accuracy in various environments. The experimental results show that our median and 90th-percentile tracking errors are 0.47 m and 1.06 m, half and a quarter of the state-of-the-art, respectively. The datasets and source code are published on GitHub (https://github.com/research-source/code).
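The two SSD-based correlation features are not defined in the abstract, so the sketch below only illustrates the kind of primitive involved: a lagged cross-correlation between two antennas' SSD time series, whose peak lag reflects how a signal pattern shifts across the array. Treat it as a hypothetical stand-in, not WiVelo's feature definition.

```python
import numpy as np

def lagged_xcorr(ssd_a, ssd_b, max_lag):
    """Normalized cross-correlation between two antennas' SSD time series
    (1-D np.ndarrays) over a range of lags. A hypothetical stand-in for
    WiVelo's spatial-temporal correlation features, not its definition."""
    a = (ssd_a - ssd_a.mean()) / (ssd_a.std() + 1e-9)
    b = (ssd_b - ssd_b.mean()) / (ssd_b.std() + 1e-9)
    corr = [np.mean(a[max(0, k):len(a) + min(0, k)] *
                    b[max(0, -k):len(b) - max(0, k)])
            for k in range(-max_lag, max_lag + 1)]
    peak_lag = int(np.argmax(corr)) - max_lag
    return peak_lag, corr  # peak lag and the full correlation profile
```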
{"title":"WiVelo: Fine-grained Wi-Fi Walking Velocity Estimation","authors":"Zhichao Cao, Chenning Li, Li Liu, Mi Zhang","doi":"10.1145/3664196","DOIUrl":"https://doi.org/10.1145/3664196","url":null,"abstract":"<p>Passive human tracking using Wi-Fi has been researched broadly in the past decade. Besides straight-forward anchor point localization, velocity is another vital sign adopted by the existing approaches to infer user trajectory. However, state-of-the-art Wi-Fi velocity estimation relies on Doppler-Frequency-Shift (DFS) which suffers from the inevitable signal noise incurring unbounded velocity errors, further degrading the tracking accuracy. In this paper, we present WiVelo that explores new spatial-temporal signal correlation features observed from different antennas to achieve accurate velocity estimation. First, we use subcarrier shift distribution (SSD) extracted from channel state information (CSI) to define two correlation features for direction and speed estimation, separately. Then, we design a mesh model calculated by the antennas’ locations to enable a fine-grained velocity estimation with bounded direction error. Finally, with the continuously estimated velocity, we develop an end-to-end trajectory recovery algorithm to mitigate velocity outliers with the property of walking velocity continuity. We implement WiVelo on commodity Wi-Fi hardware and extensively evaluate its tracking accuracy in various environments. The experimental results show our median and 90-percentile tracking errors are 0.47 m and 1.06 m, which are half and a quarter of state-of-the-art. The datasets and source codes are published through Github (https://github.com/research-source/code).</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"66 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140942015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breakthroughs in Wireless Energy Transfer (WET) technologies have revitalized Wireless Rechargeable Sensor Networks (WRSNs). However, scheduling mobile chargers rationally remains a tricky problem. Most current work considers neither the variability of scenarios nor how many mobile chargers are most appropriate for each dispatch. At the same time, most work on the mobile charger scheduling problem focuses on reducing the number of dead nodes, while the most critical metric of network performance, the packet arrival rate, is relatively neglected. In this paper, we develop a DRL-based Partial Charging (DPC) algorithm. Based on the number and urgency of charging requests, we classify charging requests into four scenarios, and for each scenario we design a corresponding request allocation algorithm. A Deep Reinforcement Learning (DRL) algorithm is then employed to train a decision model that uses environmental information to select the request allocation algorithm best suited to the current scenario. After the allocation of charging requests is confirmed, to improve the Quality of Service (QoS), i.e., the packet arrival rate of the entire network, a partial charging scheduling algorithm is designed to maximize the total charging duration of nodes in the ideal state while ensuring that all charging requests are completed. In addition, we analyze the traffic information of the nodes and use the Analytic Hierarchy Process (AHP) to determine the importance of each node, compensating for inaccurate estimates of a node's remaining lifetime in realistic scenarios. Simulation results show that our proposed algorithm outperforms existing algorithms in terms of the number of alive nodes and the packet arrival rate.
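The dispatch layer described in the abstract classifies pending requests into four scenarios by count and urgency, then lets a trained DRL model pick among per-scenario allocation algorithms. A skeleton of that layer follows; the thresholds, scenario labels, and the policy.select interface are placeholders (the real boundaries and policy come from training).

```python
FEW, URGENT = 5, 0.2  # placeholder thresholds: request count / residual-energy ratio

def classify_scenario(requests):
    # Four scenarios from (many vs. few requests) x (urgent vs. relaxed),
    # mirroring the abstract's taxonomy; the boundaries are assumptions.
    many = len(requests) > FEW
    urgent = any(r["energy_ratio"] < URGENT for r in requests)
    return ("many" if many else "few") + "-" + ("urgent" if urgent else "relaxed")

def dispatch(requests, chargers, allocators, policy):
    """allocators: dict scenario-name -> allocation function;
    policy: trained DRL decision model (hypothetical interface) that
    picks which allocator to run given the current environment."""
    scenario = classify_scenario(requests)
    chosen = policy.select(scenario, requests, chargers)  # key into allocators
    return allocators[chosen](requests, chargers)
```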
{"title":"A DRL-based Partial Charging Algorithm for Wireless Rechargeable Sensor Networks","authors":"Jiangyuan Chen, Ammar Hawbani, Xiaohua Xu, Xingfu Wang, Liang Zhao, Zhi Liu, Saeed Alsamhi","doi":"10.1145/3661999","DOIUrl":"https://doi.org/10.1145/3661999","url":null,"abstract":"<p>Breakthroughs in Wireless Energy Transfer (WET) technologies have revitalized Wireless Rechargeable Sensor Networks (WRSNs). However, how to schedule mobile chargers rationally has been quite a tricky problem. Most of the current work does not consider the variability of scenarios and how many mobile chargers should be scheduled as the most appropriate for each dispatch. At the same time, the focus of most work on the mobile charger scheduling problem has always been on reducing the number of dead nodes, and the most critical metric of network performance, packet arrival rate, is relatively neglected. In this paper, we develop a DRL-based Partial Charging (DPC) algorithm. Based on the number and urgency of charging requests, we classify charging requests into four scenarios. And for each scenario, we design a corresponding request allocation algorithm. Then, a Deep Reinforcement Learning (DRL) algorithm is employed to train a decision model using environmental information to select which request allocation algorithm is optimal for the current scenario. After the allocation of charging requests is confirmed, to improve the Quality of Service (QoS), i.e., the packet arrival rate of the entire network, a partial charging scheduling algorithm is designed to maximize the total charging duration of nodes in the ideal state while ensuring that all charging requests are completed. In addition, we analyze the traffic information of the nodes and use the Analytic Hierarchy Process (AHP) to determine the importance of the nodes to compensate for the inaccurate estimation of the node’s remaining lifetime in realistic scenarios. Simulation results show that our proposed algorithm outperforms the existing algorithms regarding the number of alive nodes and packet arrival rate.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"46 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140940215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agricultural irrigation is a significant contributor to freshwater consumption. However, the current irrigation systems used in the field are not efficient. They rely mainly on soil moisture sensors and the experience of growers, but do not account for future soil moisture loss. Predicting soil moisture loss is challenging because it is influenced by numerous factors, including soil texture, weather conditions, and plant characteristics. This paper proposes a solution to improve irrigation efficiency, which is called DRLIC. DRLIC is a sophisticated irrigation system that uses deep reinforcement learning (DRL) to optimize its performance. The system employs a neural network, known as the DRL control agent, which learns an optimal control policy that considers both the current soil moisture measurement and the future soil moisture loss. We introduce an irrigation reward function that enables our control agent to learn from previous experiences. However, there may be instances where the output of our DRL control agent is unsafe, such as irrigating too much or too little water. To avoid damaging the health of the plants, we implement a safety mechanism that employs a soil moisture predictor to estimate the performance of each action. If the predicted outcome is deemed unsafe, we perform a relatively conservative action instead. To demonstrate the real-world application of our approach, we develop an irrigation system that comprises sprinklers, sensing and control nodes, and a wireless network. We evaluate the performance of DRLIC by deploying it in a testbed consisting of six almond trees. During a 15-day in-field experiment, we compare the water consumption of DRLIC with a widely-used irrigation scheme. Our results indicate that DRLIC outperforms the traditional irrigation method by achieving water savings of up to 9.52%.
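The safety mechanism is described concretely: before an irrigation action is executed, a learned soil-moisture predictor simulates its effect, and an out-of-bounds prediction triggers a conservative fallback. A direct sketch, with the moisture band and fallback dose as assumed illustrative values:

```python
def safe_irrigation_action(agent_action, moisture_now, predictor,
                           low=0.20, high=0.35, fallback=0.5):
    """predictor(moisture, action) -> predicted next soil moisture
    (a learned model in DRLIC; any callable here). The band low/high
    and the conservative fallback dose are illustrative assumptions."""
    predicted = predictor(moisture_now, agent_action)
    if low <= predicted <= high:
        return agent_action  # DRL action judged safe, execute as-is
    # Otherwise substitute a conservative dose: irrigate a fixed amount
    # if the soil would get too dry, hold off if it would get too wet.
    return fallback if predicted < low else 0.0

# Toy usage with a linear stand-in predictor: moisture decays 0.03/day
# and rises 0.1 per unit of irrigation.
pred = lambda m, a: m - 0.03 + 0.1 * a
print(safe_irrigation_action(0.9, 0.22, pred))  # in-band prediction -> 0.9
```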
{"title":"Optimizing Irrigation Efficiency using Deep Reinforcement Learning in the Field","authors":"Wan Du, Xianzhong Ding","doi":"10.1145/3662182","DOIUrl":"https://doi.org/10.1145/3662182","url":null,"abstract":"<p>Agricultural irrigation is a significant contributor to freshwater consumption. However, the current irrigation systems used in the field are not efficient. They rely mainly on soil moisture sensors and the experience of growers, but do not account for future soil moisture loss. Predicting soil moisture loss is challenging because it is influenced by numerous factors, including soil texture, weather conditions, and plant characteristics. This paper proposes a solution to improve irrigation efficiency, which is called <i>DRLIC</i>. <i>DRLIC</i> is a sophisticated irrigation system that uses deep reinforcement learning (DRL) to optimize its performance. The system employs a neural network, known as the DRL control agent, which learns an optimal control policy that considers both the current soil moisture measurement and the future soil moisture loss. We introduce an irrigation reward function that enables our control agent to learn from previous experiences. However, there may be instances where the output of our DRL control agent is unsafe, such as irrigating too much or too little water. To avoid damaging the health of the plants, we implement a safety mechanism that employs a soil moisture predictor to estimate the performance of each action. If the predicted outcome is deemed unsafe, we perform a relatively conservative action instead. To demonstrate the real-world application of our approach, we develop an irrigation system that comprises sprinklers, sensing and control nodes, and a wireless network. We evaluate the performance of <i>DRLIC</i> by deploying it in a testbed consisting of six almond trees. During a 15-day in-field experiment, we compare the water consumption of <i>DRLIC</i> with a widely-used irrigation scheme. Our results indicate that <i>DRLIC</i> outperforms the traditional irrigation method by achieving water savings of up to 9.52%.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"28 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140839645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiangjie Kong, Xiaoxue Yang, Si Shen, Guojiang Shen
Vehicle edge computing (VEC) provides efficient services for vehicles by offloading tasks to edge servers. Notably, extant research mainly employs methods such as deep learning and reinforcement learning to make resource allocation decisions, without adequately accounting for the ramifications of the high-speed mobility of vehicles and the dynamic nature of the Internet of Vehicles (IoV) on the decision-making process. This paper tackles this issue by introducing a novel concept: a digital twin-assisted IoV. In this framework, the digital twin of the IoV provides training data for computation offloading and content caching decisions, allowing edge servers to interact directly with the dynamic environment while capturing its changes in real time. Through this collaboration, intelligent edge servers can promptly respond to vehicular requests and return results. We transform the dynamic edge computing problem into a Markov decision process (MDP) and then solve it with the twin delayed deep deterministic policy gradient (TD3) algorithm. Simulation experiments demonstrate that our proposed approach adapts to the dynamic environment while successfully enhancing Quality of Service, i.e., decreasing total delay and energy consumption.
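TD3 is distinguished from plain DDPG by twin critics, target-policy smoothing, and delayed actor updates. The critic-target computation below captures the first two (a framework-agnostic numpy sketch; the paper's offloading-specific state, action, and reward design is not reproduced):

```python
import numpy as np

def td3_critic_target(rewards, next_states, done, target_actor,
                      target_q1, target_q2, gamma=0.99,
                      noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """Standard TD3 target: perturb the target action with clipped noise
    (target-policy smoothing), then bootstrap from the minimum of the
    two target critics to curb overestimation. The networks are
    callables mapping arrays to arrays, placeholders for trained models."""
    a_next = target_actor(next_states)
    noise = np.clip(np.random.normal(0.0, noise_std, a_next.shape),
                    -noise_clip, noise_clip)
    a_next = np.clip(a_next + noise, -act_limit, act_limit)
    q_min = np.minimum(target_q1(next_states, a_next),
                       target_q2(next_states, a_next))
    return rewards + gamma * (1.0 - done) * q_min
```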
{"title":"Energy-Delay Joint Optimization for Task Offloading in Digital Twin-Assisted Internet of Vehicles","authors":"Xiangjie Kong, Xiaoxue Yang, Si Shen, Guojiang Shen","doi":"10.1145/3658671","DOIUrl":"https://doi.org/10.1145/3658671","url":null,"abstract":"<p>Vehicle edge computing (VEC) provides efficient services for vehicles by offloading tasks to edge servers. Notably, extant research mainly employs methods such as deep learning and reinforcement learning to make resource allocation decisions, without adequately accounting for the ramifications of high-speed mobility of vehicles and the dynamic nature of the Internet of Vehicles (IoV) on the decision-making process. This paper endeavours to tackle the aforementioned issue through the introduction of a novel concept, namely, a digital twin-assisted IoV. Among them, the digital twin of IoV offers training data for computational offloading and content caching decisions, which allows edge servers to directly interact with the dynamic environment while capturing its dynamic changes in real-time. Through this collaborative endeavour, edge intelligent servers can promptly respond to vehicular requests and return results. We transform the dynamic edge computing problem into a Markov decision process (MDP), and then solve it with the twin delayed deep deterministic policy gradient (TD3) algorithm. Simulation experiments demonstrate the adaptability of our proposed approach in the dynamic environment while successfully enhancing the Quality of Service, that is, decreasing total delay and energy consumption.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"101 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the past decades, explosive numbers of Internet of Things (IoT) devices (objects) have been connected to the Internet, enabling users to access, control, and monitor surrounding phenomena anytime and anywhere. To provide seamless interaction between the cyber world and the real world, digital twins (DTs) of objects (IoT devices) are key enablers for real-time monitoring, behavior simulation, and predictive decisions on objects. Compared to centralized cloud computing, mobile edge computing (MEC) has been envisioned as a promising paradigm for low-latency IoT applications. Accelerating the use of DTs in MEC networks will bring unprecedented benefits to diverse services through the co-evolution of physical objects and their virtual DTs, and DT-assisted service provisioning has attracted increasing attention recently.
In this paper, we consider novel DT placement and migration problems in an MEC network under the assumption that objects and users are mobile, jointly considering the freshness of DT data and the service cost of users requesting DT data. To this end, we first propose an algorithm for the DT placement problem that aims to minimize the sum of the DT update cost of objects and the total service cost of users requesting DT data, through efficient DT placement and resource allocation for processing user requests. We then devise an approximation algorithm with a provable approximation ratio for a special case of the DT placement problem in which each user requests the DT data of only one object. Meanwhile, considering the mobility of users and objects, we devise an online, two-layer scheduling algorithm for DT migrations to further reduce the total service cost of users within a given finite time horizon. We finally evaluate the performance of the proposed algorithms through experimental simulations. The simulation results show that the proposed algorithms are promising.
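For the online migration layer, the objective is the sum of DT update cost and user service cost over a finite horizon. A simple greedy rule in that spirit is sketched below: migrate a DT to the cheapest edge server only when the projected saving over the remaining horizon amortizes the one-off migration cost. The cost callables and the amortization rule are assumptions, not the paper's algorithm.

```python
def maybe_migrate(dt, servers, update_cost, service_cost,
                  migration_cost, horizon_slots):
    """update_cost(dt, s): per-slot cost of keeping dt fresh on server s;
    service_cost(dt, s): per-slot cost of serving users requesting dt's
    data from s. Both are assumed callables; the amortization rule below
    is a heuristic stand-in for the paper's online algorithm."""
    def per_slot(s):
        return update_cost(dt, s) + service_cost(dt, s)
    best = min(servers, key=per_slot)
    saving = (per_slot(dt.host) - per_slot(best)) * horizon_slots
    if best is not dt.host and saving > migration_cost(dt, dt.host, best):
        dt.host = best  # migrate: the saving outweighs the one-off cost
    return dt.host
```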
{"title":"Cost Minimization of Digital Twin Placements in Mobile Edge Computing","authors":"Yuncan Zhang, Weifa Liang, Wenzheng Xu, Zichuan Xu, Xiaohua Jia","doi":"10.1145/3658449","DOIUrl":"https://doi.org/10.1145/3658449","url":null,"abstract":"<p>In the past decades, explosive numbers of Internet of Things (IoT) devices (objects) have been connected to the Internet, which enable users to access, control, and monitor their surrounding phenomenons at anytime and anywhere. To provide seamless interactions between the cyber world and the real world, Digital twins (DTs) of objects (IoT devices) are key enablers for real time monitoring, behavior simulations and predictive decisions on objects. Compared to centralized cloud computing, mobile edge computing (MEC) has been envisioning as a promising paradigm for low latency IoT applications. Accelerating the usage of DTs in MEC networks will bring unprecedented benefits to diverse services, through the co-evolution between physical objects and their virtual DTs, and DT-assisted service provisioning has attracted increasing attention recently. </p><p>In this paper, we consider novel DT placement and migration problems in an MEC network with the mobility assumption of objects and users, by jointly considering the freshness of DT data and the service cost of users requesting for DT data. To this end, we first propose an algorithm for the DT placement problem with the aim to minimize the sum of the DT update cost of objects and the total service cost of users requesting for DT data, through efficient DT placements and resource allocation to process user requests. We then devise an approximation algorithm with a provable approximation ratio for a special case of the DT placement problem when each user requests the DT data of only one object. Meanwhile, considering the mobility of users and objects, we devise an online, two-layer scheduling algorithm for DT migrations to further reduce the total service cost of users within a given finite time horizon. We finally evaluate the performance of the proposed algorithms through experimental simulations. The simulation results show that the proposed algorithms are promising.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"39 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UWB (Ultra-wideband) has been shown to be a promising technology for providing accurate positioning in the Internet of Things. However, its performance degrades significantly in practice due to Non-Line-Of-Sight (NLOS) issues. Various approaches have implicitly or explicitly explored this problem. In this paper, we propose RefLoc, which leverages the unique benefits of UWB to address the NLOS problem. While we find that NLOS links can vary significantly within the same environment, LOS links possess similar features, which can be captured by the high bandwidth of UWB. Specifically, the high-level idea of RefLoc is to first identify links among anchors with known positions and then leverage those links as references for tag link identification. To achieve this, we address the practical challenges of deriving anchor link status, extracting qualified link features, and inferring tag link status from anchor links. We implement RefLoc on commercial hardware and conduct extensive experiments in different environments. The evaluation results show that RefLoc achieves an average NLOS identification accuracy of 96% in various environments, improving on the state of the art by 10%, and reduces localization error by 80% with little overhead.
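RefLoc's core move is that anchor-to-anchor links, whose LOS/NLOS status can be derived because anchor positions are known, serve as labeled references for classifying tag links. A nearest-neighbor rendering of that idea follows; the feature vector and k are placeholders, since the abstract specifies neither the features nor the classifier.

```python
import numpy as np

def classify_tag_link(tag_feature, anchor_features, anchor_labels, k=3):
    """anchor_features: (n_links, d) array of feature vectors computed on
    anchor-anchor links (the paper's link features are not reproduced;
    any d-dim features work here). anchor_labels: np.ndarray of
    1 = LOS / 0 = NLOS, derivable because anchor positions are known.
    Majority vote among the k nearest anchor links labels the tag link."""
    dists = np.linalg.norm(anchor_features - tag_feature, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(round(anchor_labels[nearest].mean()))  # 1 = LOS, 0 = NLOS
```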
{"title":"Exploiting Anchor Links for NLOS Combating in UWB Localization","authors":"Yijie Chen, Jiliang Wang, Jing Yang","doi":"10.1145/3657639","DOIUrl":"https://doi.org/10.1145/3657639","url":null,"abstract":"<p>UWB (Ultra-wideband) has been shown as a promising technology to provide accurate positioning for the Internet of Things. However, its performance significantly degrades in practice due to Non-Line-Of-Sight (NLOS) issues. Various approaches have implicitly or explicitly explored the problem. In this paper, we propose RefLoc that leverages the unique benefits of UWB to address the NLOS problem. While we find NLOS links can vary significantly in the same environment, LOS links possess similar features which can be captured by the high bandwidth of UWB. Specifically, the high-level idea of RefLoc is to first identify links among anchors with known positions and leverage those links as references for tag link identification. To achieve this, we address the practical challenges of deriving anchor link status, extracting qualified link features, and inferring tag links with anchor links. We implement RefLoc on commercial hardware and conduct extensive experiments in different environments. The evaluation results show that RefLoc achieves an average NLOS identification accuracy of 96% in various environments, improving the state-of-the-art by 10%, and reduces 80% localization error with little overhead.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"232 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}