Zhengjie Sun, Hui Yang, Chao Li, Qiuyan Yao, Yun Teng, Jie Zhang, Sheng Liu, Yunbo Li, Athanasios V. Vasilakos
In recent years, the number of devices and terminals connected to the smart city has increased significantly, and edge networks face a greater variety of connected objects and massive numbers of services. Because these services have heterogeneous QoS requirements, optimally allocating limited computing resources to all of them to obtain satisfactory performance has always been a huge challenge for smart cities. In particular, delay is intolerable for services in certain applications, such as medical and industrial applications, which therefore require high priority. It is thus crucial to schedule each service to the optimal node through flexible, dynamic scheduling to ensure the user experience. In this paper, we propose an attention-based resource allocation scheme for hierarchical edge computing networks in the smart city, which extracts a small number of features that can represent services from the large amount of information collected at edge nodes. The attention mechanism is used to quickly determine the priority of services. On this basis, task deployment and resource allocation strategies for different task priorities are developed by introducing Q-learning to ensure quality of service in smart cities. Simulation results show that the proposed scheme effectively improves edge network resource utilization, reduces the average delay of task processing, and guarantees quality of service.
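The abstract leaves the Q-learning formulation at a high level; below is a minimal tabular Q-learning sketch of priority-aware task placement under assumed state, action, and reward definitions (the state encoding, the delay-based reward, and all constants are illustrative, not the paper's):

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for priority-aware task placement.
# State: (task priority, discretized node loads); action: index of the
# edge node that receives the task. All constants are illustrative.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
NUM_NODES = 4
Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_node(state):
    if random.random() < EPSILON:                 # explore
        return random.randrange(NUM_NODES)
    return max(range(NUM_NODES), key=lambda a: Q[(state, a)])  # exploit

def reward(priority, delay):
    # Assumed reward shape: higher-priority tasks pay more for delay.
    return -priority * delay

def update(state, action, r, next_state):
    best_next = max(Q[(next_state, a)] for a in range(NUM_NODES))
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

s = ("high", (0, 1, 0, 2))                        # hypothetical load snapshot
a = choose_node(s)
update(s, a, reward(3, 0.2), ("high", (0, 1, 1, 2)))
```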
{"title":"A Resource Allocation Scheme for Edge Computing Network in Smart City Based on Attention Mechanism","authors":"Zhengjie Sun, Hui Yang, Chao Li, Qiuyan Yao, Yun Teng, Jie Zhang, Sheng Liu, Yunbo Li, Athanasios V. Vasilakos","doi":"10.1145/3650031","DOIUrl":"https://doi.org/10.1145/3650031","url":null,"abstract":"<p>In recent years, the number of devices and terminals connected to the smart city has increased significantly. Edge networks face a greater variety of connected objects and massive services. Considering that a large number of services have different QoS requirements, it has always been a huge challenge for smart city to optimally allocate limited computing resources to all services to obtain satisfactory performance. In particular, delay is intolerable for services in certain applications, such as medical, industrial applications, etc, that such applications require the high priority. Therefore, through flexibly dynamic scheduling, it is crucial to schedule services to the optimal node to ensure user experience. In this paper, we propose a resource allocation scheme for hierarchical edge computing network in smart city based on attention mechanism, for extracting a small number of features that can represent services from a large amount of information collected from edge nodes. The attention mechanism is used to quickly determine the priority of the services. Based on this, task deployment and resource allocation for different task priorities are developed to ensure the quality of service in smart cities by introducing Q-learning. Simulation results show that the proposed scheme can effectively improve the edge network resource utilization, reduce the average delay of task processing, and effectively guarantee the quality of service.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"16 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140098025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate localization is one of the basic requirements for smart cities and smart factories. In wireless cellular network localization, straight-line propagation of electromagnetic waves between base stations and users is called line-of-sight (LOS) propagation. When signals cannot travel in a straight line because of obstruction by buildings or trees, the scenario is usually called non-LOS (NLOS) propagation. Traditional localization algorithms such as TDOA and AOA assume LOS channels and are no longer applicable in environments where NLOS propagation dominates; moreover, in most scenarios only a few base stations have LOS channels to a given user, so traditional algorithms cannot satisfy the demands of high-precision localization. In addition, nonideal factors in the actual system can further degrade localization accuracy. Therefore, the approach developed in this paper uses knowledge graph and graph neural network (GNN) technology to model communication data as knowledge graphs, and adopts a knowledge graph inference technique based on a heterogeneous graph attention mechanism to infer unknown data representations in complex scenarios from the known data and the relationships between the data, achieving high-precision localization in scenarios where LOS and NLOS channels coexist. We experimentally demonstrate spatial 2D localization accuracy of approximately 10 meters on multiple datasets and find that our proposed algorithm has higher accuracy and stronger robustness than state-of-the-art algorithms.
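As a concrete reference point for the "heterogeneous graph attention" step, here is a generic single-head graph-attention aggregation in the GAT style; the shapes, the LeakyReLU scoring, and the random parameters are standard assumptions rather than the paper's exact formulation:

```python
import numpy as np

# Sketch of attention-weighted neighbor aggregation, the core operation of
# a graph attention layer, as might be used to infer a node's representation
# from related nodes. Generic GAT-style scoring; not the paper's model.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d)) * 0.1     # shared linear transform
a = rng.normal(size=2 * d) * 0.1      # attention scoring vector

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def aggregate(h_i, neighbors):
    """Infer node i's representation from its neighbors' embeddings."""
    scores = np.array([leaky_relu(a @ np.concatenate([W @ h_i, W @ h_j]))
                       for h_j in neighbors])
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over neighbors
    return sum(w * (W @ h_j) for w, h_j in zip(alpha, neighbors))

h_i = rng.normal(size=d)
print(aggregate(h_i, [rng.normal(size=d) for _ in range(3)]))
```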
{"title":"Accurate Localization in LOS/NLOS Channel Coexistence Scenarios Based on Heterogeneous Knowledge Graph Inference","authors":"Bojun Zhang, Xiulong Liu, Xin Xie, Xinyu Tong, Yungang Jia, Tuo Shi, Wenyu Qu","doi":"10.1145/3651618","DOIUrl":"https://doi.org/10.1145/3651618","url":null,"abstract":"<p>Accurate localization is one of the basic requirements for smart cities and smart factories. In wireless cellular network localization, the straight-line propagation of electromagnetic waves between base stations and users is called line-of-sight (LOS) wireless propagation. In some cases, electromagnetic wave signals cannot propagate in a straight line due to obstruction by buildings or trees, and these scenarios are usually called non-LOS (NLOS) wireless propagation. Traditional localization algorithms such as TDOA, AOA, <i>etc.</i>, are based on LOS channels, which are no longer applicable in environments where NLOS propagation is dominant, and in most scenarios, the number of base stations with LOS channels containing users is often small, resulting in traditional localization algorithms being unable to satisfy the accuracy demand of high-precision localization. In addition, some nonideal factors may be included in the actual system, all of which can lead to localization accuracy degradation. Therefore, the approach developed in this paper uses knowledge graph and graph neural network (GNN) technology to model communication data as knowledge graphs, and it adopts the knowledge graph inference technique based on a heterogeneous graph attention mechanism to infer unknown data representations in complex scenarios based on the known data and the relationships between the data to achieve high-precision localization in scenarios with LOS/NLOS channel coexistence. We experimentally demonstrate a spatial 2D localization accuracy level of approximately 10 meters on multiple datasets and find that our proposed algorithm has higher accuracy and stronger robustness than the state-of-the-art algorithms.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"33 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140070039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simeon Babatunde, Arwa Alsubhi, Josiah Hester, Jacob Sorber
Communication presents a critical challenge for emerging intermittently powered batteryless sensors. Batteryless devices that operate entirely on harvested energy often experience frequent, unpredictable power outages and have trouble keeping time accurately. Consequently, effective communication using today’s low-power wireless network standards and protocols becomes difficult, particularly because existing standards are usually designed to support reliably powered devices with predictable node availability and accurate timekeeping capabilities for connection and congestion management.
In this paper, we present Greentooth, a robust and energy-efficient wireless communication protocol for intermittently powered sensor networks. It enables reliable communication between a receiver and multiple batteryless sensors using TDMA-style scheduling and low-power wake-up radios for synchronization. Greentooth employs lightweight, energy-efficient connections that are resilient to transient power outages, while significantly improving the network reliability, throughput, and energy efficiency of both the battery-free sensor nodes and the receiver, which may itself be untethered and energy-constrained. We evaluate Greentooth using a custom-built batteryless sensor prototype on synthetic and real-world energy traces recorded at different locations in a garden across different times of day. Results show that under intermittent ambient solar energy, Greentooth achieves 73% and 283% more throughput than AWD MAC and RI-CPT-WUR, respectively, and over 2x longer receiver lifetime.
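To make the TDMA-style scheduling concrete, the following receiver-side sketch assigns each sensor a fixed slot and keeps it across power outages; the slot length, registration rule, and class names are assumptions for illustration, not the Greentooth specification:

```python
# Minimal sketch of TDMA-style slot bookkeeping on the receiver side.
# Slot length, node IDs, and the reconnection rule are assumed values;
# the actual Greentooth protocol details are in the paper.
SLOT_MS = 100

class Receiver:
    def __init__(self):
        self.schedule = {}          # node_id -> slot index

    def register(self, node_id):
        # Assign the lowest free slot; a node that browned out and came
        # back keeps its old slot, so an outage does not force re-pairing.
        if node_id not in self.schedule:
            used = set(self.schedule.values())
            self.schedule[node_id] = next(i for i in range(len(used) + 1)
                                          if i not in used)
        return self.schedule[node_id]

    def slot_start_ms(self, node_id, round_start_ms):
        return round_start_ms + self.schedule[node_id] * SLOT_MS

rx = Receiver()
print(rx.register("sensor-a"), rx.register("sensor-b"), rx.slot_start_ms("sensor-b", 0))
```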
{"title":"Greentooth: Robust and Energy Efficient Wireless Networking for Batteryless Devices","authors":"Simeon Babatunde, Arwa Alsubhi, Josiah Hester, Jacob Sorber","doi":"10.1145/3649221","DOIUrl":"https://doi.org/10.1145/3649221","url":null,"abstract":"<p>Communication presents a critical challenge for emerging intermittently powered batteryless sensors. Batteryless devices that operate entirely on harvested energy often experience frequent, unpredictable power outages and have trouble keeping time accurately. Consequently, effective communication using today’s low-power wireless network standards and protocols becomes difficult, particularly because existing standards are usually designed to support reliably powered devices with predictable node availability and accurate timekeeping capabilities for connection and congestion management. </p><p>In this paper, we present Greentooth, a robust and energy-efficient wireless communication protocol for intermittently-powered sensor networks. It enables reliable communication between a receiver and multiple batteryless sensors using TDMA-style scheduling and low-power wake-up radios for synchronization. Greentooth employs lightweight and energy-efficient connections that are resilient to transient power outages, while significantly improving network reliability, throughput, and energy efficiency of both the battery-free sensor nodes and the receiver—which could be untethered and energy-constrained. We evaluate Greentooth using a custom-built batteryless sensor prototype on synthetic and real-world energy traces recorded from different locations in a garden across different times of the day. Results show that Greentooth achieves 73% and 283% more throughput compared to AWD MAC and RI-CPT-WUR respectively under intermittent ambient solar energy, and over 2x longer receiver lifetime.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"82 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting future traffic conditions from urban sensor data is crucial for smart city applications. Recent traffic forecasting methods are derived from Spatio-Temporal Graph Convolution Networks (STGCNs). Despite their remarkable achievements, these spatio-temporal models have mainly been evaluated on small-scale datasets. With the rapid growth of the Internet of Things and urbanization, cities are deploying ever more sensors, collecting extensive data that provides more accurate insights into citywide traffic dynamics. Spatio-temporal graph modeling on such large-scale traffic data is challenging due to the memory constraints of the computing device. For traffic forecasting, sampling subgraphs of the road network onto multiple devices is feasible, and many GCN sampling methods have been proposed recently. However, combining them with STGCNs degrades performance, primarily because each sampled subgraph analyzes traffic states from only a regional perspective, introducing prediction biases.

To address these challenges, we introduce a parallel STGCN framework called PaSTG. PaSTG divides the road network into regions, each processed by an individual STGCN on its own device. To mitigate regional biases, Aggregation Blocks in PaSTG merge the spatio-temporal features from each STBlock; this collaboration enhances traffic forecasting. Furthermore, PaSTG implements pipeline parallelism and employs a graph partition algorithm to optimize pipeline efficiency. We evaluate PaSTG on various STGCNs using three traffic datasets on multiple GPUs. Results demonstrate that our parallel approach applies widely to diverse STGCN models, surpassing existing GCN samplers by up to 57.4% in prediction accuracy, and achieves speedups of up to 2.87x in training and 4.70x in inference compared to GCN samplers.
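A toy version of the region idea, under assumed partitioning and merge rules (PaSTG's actual partition algorithm and Aggregation Blocks are more elaborate): split the road graph into regions, treat per-node features as per-region model output, and average features across boundary nodes so regions exchange context:

```python
import numpy as np

# Sketch of the region-partition idea: split nodes into regions, run a
# per-region model, then merge features at boundary nodes so each region
# sees some cross-region context. The fixed partition and the mean-merge
# rule are illustrative assumptions, not PaSTG's algorithm.
rng = np.random.default_rng(1)
num_nodes, feat_dim = 10, 4
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (7, 8), (8, 9), (4, 7)]
regions = {0: [0, 1, 2, 3, 4], 1: [5, 6, 7, 8, 9]}   # toy two-region split

def boundary_nodes(edges, regions):
    node_region = {n: r for r, ns in regions.items() for n in ns}
    cross = [(u, v) for u, v in edges if node_region[u] != node_region[v]]
    return {u for u, _ in cross} | {v for _, v in cross}

features = rng.normal(size=(num_nodes, feat_dim))    # stand-in for STBlock output
for n in boundary_nodes(edges, regions):
    nbrs = [v for u, v in edges if u == n] + [u for u, v in edges if v == n]
    features[n] = features[[n] + nbrs].mean(axis=0)  # merge cross-region context
```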
{"title":"PaSTG: A Parallel Spatio-Temporal GCN Framework for Traffic Forecasting in Smart City","authors":"Xianhao He, Yikun Hu, Qing Liao, Hantao Xiong, Wangdong Yang, Kenli Li","doi":"10.1145/3649467","DOIUrl":"https://doi.org/10.1145/3649467","url":null,"abstract":"<p>Predicting future traffic conditions from urban sensor data is crucial for smart city applications. Recent traffic forecasting methods are derived from Spatio-Temporal Graph Convolution Networks (STGCNs). Despite their remarkable achievements, these spatio-temporal models have mainly been evaluated on small-scale datasets. In light of the rapid growth of the Internet of Things and urbanization, cities are witnessing an increased deployment of sensors, resulting in the collection of extensive sensor data to provide more accurate insights into citywide traffic dynamics. Spatio-temporal graph modeling on large-scale traffic data is challenging due to the memory constraint of the computing device. For traffic forecasting, subgraph sampling from road networks onto multiple devices is feasible. Many GCN sampling methods have been proposed recently. However, combining these with STGCNs degrades performance. This is primarily due to prediction biases introduced by each sampled subgraph, which analyze traffic states from a regional perspective. </p><p>Addressing these challenges, we introduce a parallel STGCN framework called PaSTG. PaSTG divides the road network into regions, each processed by an individual STGCN in a device. To mitigate regional biases, Aggregation Blocks in PaSTG merge spatial-temporal features from each STBlock. This collaboration enhances traffic forecasting. Furthermore, PaSTG implements pipeline parallelism and employs a graph partition algorithm for optimized pipeline efficiency. We evaluate PaSTG on various STGCNs using three traffic datasets on multiple GPUs. Results demonstrate that our parallel approach applies widely to diverse STGCN models, surpassing existing GCN samplers by up to 57.4% in prediction accuracy. Additionally, the parallel framework achieves speedups of up to 2.87x and 4.70x in training and inference compared to GCN samplers.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"74 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article introduces LinkStream, a liquidity analysis system based on multiple video streams, designed and implemented for oilfields. LinkStream combines several techniques to overcome limits on computing power and network latency. First, the system adopts an edge-central architecture with tailoring based on spatio-temporal correlation, which greatly reduces computing requirements and network costs and enables real-time analysis of large-scale video streams on limited edge devices. Second, it defines a set of liquidity information to describe the liquidity status of the oilfield. Finally, it uses object tracking to design a counting algorithm for the tubing objects unique to the oilfield. We have deployed LinkStream in an oilfield in Iraq, where it performs real-time inference on over 200 video streams with acceptable resource overhead.
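The tubing-counting algorithm is not detailed in the abstract; a minimal track-based counting rule of the kind such systems use (count a tracked centroid once when it crosses a virtual line) might look like this, with the line position and coordinates as assumed values:

```python
# Sketch of a track-based counting rule: an object is counted when its
# tracked centroid crosses a virtual line. Track IDs and coordinates are
# illustrative; the paper's tubing-counting algorithm is more involved.
LINE_X = 320  # virtual counting line (pixel column), an assumed value

def count_crossings(tracks):
    """tracks: dict of track_id -> list of (x, y) centroids over time."""
    count = 0
    for trajectory in tracks.values():
        for (x0, _), (x1, _) in zip(trajectory, trajectory[1:]):
            if x0 < LINE_X <= x1:       # left-to-right crossing only
                count += 1
                break                   # count each track at most once
    return count

print(count_crossings({1: [(300, 50), (330, 52)], 2: [(350, 80), (340, 82)]}))  # -> 1
```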
{"title":"A Liquidity Analysis System for Large-Scale Video Streams in the Oilfield","authors":"Qiang Ma, Hao Yuan, Zhe Hu, Xu Wang, Zheng Yang","doi":"10.1145/3649222","DOIUrl":"https://doi.org/10.1145/3649222","url":null,"abstract":"<p>This article introduces LinkStream, a liquidity analysis system based on multiple video streams designed and implemented for oilfield. LinkStream combines a variety of technologies to solve several problems in computing power and network latency. First, the system adopts an edge-central architecture and tailoring based on spatio-temporal correlation, which greatly reduces computing power requirements and network costs, and enables real-time analysis of large-scale video stream on limited edge devices. Second, it designed a set of liquidity information to describe the liquidity status in the oilfield. Finally, it uses object tracking technology to design a counting algorithm for the unique tubing object in the oilfield. We have deployed LinkStream in an oilfield in Iraq. LinkStream can perform realtime inference on over 200 video streams with acceptable resource overhead.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"17 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Android system is widely deployed in energy-constrained IoT devices for many practical applications, such as smartphones, smart homes, healthcare, fitness, and beacons. However, Android users often suffer from app crashes, which directly disrupt the user experience and can lead to data loss. Until now, the community has had a limited understanding of their prevalence, characteristics, and root causes. In this paper, we make an in-depth study of crash events in ten very popular apps of different genres, based on fine-grained system-level traces crowd-sourced from 93 million Android devices. We find that app crashes occur prevalently across the hardware models studied, and better hardware does not seem to substantially relieve the problem. Most importantly, we unravel the multi-fold root causes of app crashes and pinpoint that most crashes stem from a subtle yet crucial inconsistency between the memory/process management model that app developers assume and Android's actual implementation. We design practical approaches to addressing this inconsistency; after large-scale deployment, they reduce app crashes by 40.4% with negligible system overhead. In addition, we summarize important lessons learned from this study and have released our measurement code and data to the community.
{"title":"Who Should We Blame for Android App Crashes? An In-Depth Study at Scale and Practical Resolutions","authors":"Liangyi Gong, Hao Lin, Daibo Liu, Lanqi Yang, Hongyi Wang, Jiaxing Qiu, Zhenhua Li, Feng Qian","doi":"10.1145/3649895","DOIUrl":"https://doi.org/10.1145/3649895","url":null,"abstract":"<p>Android system has been widely deployed in energy-constrained IoT devices for many practical applications, such as smart phone, smart home, healthcare, fitness, and beacons. However, Android users oftentimes suffer from app crashes, which directly disrupt user experience and could lead to data loss. Till now, the community have limited understanding of their prevalence, characteristics, and root causes. In this paper, we make an in-depth study of the crash events regarding ten very popular apps of different genres, based on fine-grained system-level traces crowd-sourced from 93 million Android devices. We find that app crashes occur prevalently on the various hardware models studied, and better hardware does not seem to essentially relieve the problem. Most importantly, we unravel multi-fold root causes of app crashes, and pinpoint that the most crashes stem from the subtle yet crucial inconsistency between app developers’ supposed memory/process management model and Android’s actual implementations. We design practical approaches to addressing the inconsistency; after large-scale deployment, they reduce 40.4% of the app crashes with negligible system overhead. In addition, we summarize important lessons learned from this study, and have released our measurement code/data to the community.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"35 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pengfei Wang, Dian Jiao, Leyou Yang, Bin Wang, Ruiyun Yu
Mobile crowdsensing leverages the power of a vast group of participants to collect sensory data, presenting an economical solution for data collection. However, because participants vary, the quality of the sensory data varies significantly, making it crucial to extract truthful information from data of differing quality. Additionally, given their fixed time and monetary budgets, participants typically perform only a subset of tasks, so the datasets collected in real-world scenarios are usually sparse. Current truth discovery methods struggle to adapt to datasets of varying sparsity, especially sparse ones. In this paper, we propose HGEM, an adaptive hypergraph-based EM truth discovery method. HGEM leverages the topological characteristics of hypergraphs to model sparse datasets, improving its ability to estimate both the reliability of participants and the true values of the observed events. Experiments on simulated and real-world scenarios demonstrate that HGEM consistently achieves higher predictive accuracy.
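For readers unfamiliar with EM-style truth discovery, the canonical loop alternates between estimating truths from reliability-weighted reports and re-estimating reliabilities from residuals; the sketch below shows that generic scheme on a sparse report matrix, not HGEM's hypergraph formulation:

```python
import numpy as np

# Generic EM-style truth-discovery loop: (1) estimate each task's truth as
# a reliability-weighted combination of participant reports, (2) re-estimate
# each participant's reliability from their squared residuals. NaN marks a
# task the participant did not perform (the sparsity the paper targets).
reports = np.array([[10.0, 5.0],
                    [11.0, np.nan],
                    [20.0, 9.0]])          # participants x tasks
mask = ~np.isnan(reports)                  # observed entries only
weights = np.ones(reports.shape[0])        # initial reliabilities

for _ in range(20):
    truths = (weights[:, None] * np.where(mask, reports, 0)).sum(0) / \
             (weights[:, None] * mask).sum(0)                 # truth estimates
    errors = np.where(mask, (reports - truths) ** 2, 0).sum(1) / mask.sum(1)
    weights = 1.0 / (errors + 1e-9)                           # reliability update

print(truths, weights / weights.sum())     # outlier participant gets low weight
```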
{"title":"Hypergraph-based Truth Discovery for Sparse Data in Mobile Crowdsensing","authors":"Pengfei Wang, Dian Jiao, Leyou Yang, Bin Wang, Ruiyun Yu","doi":"10.1145/3649894","DOIUrl":"https://doi.org/10.1145/3649894","url":null,"abstract":"<p>Mobile crowdsensing leverages the power of a vast group of participants to collect sensory data, thus presenting an economical solution for data collection. However, due to the variability among participants, the quality of sensory data varies significantly, making it crucial to extract truthful information from sensory data of differing quality. Additionally, given the fixed time and monetary costs for the participants, they typically only perform a subset of tasks. As a result, the datasets collected in real-world scenarios are usually sparse. Current truth discovery methods struggle to adapt to datasets with varying sparsity, especially when dealing with sparse datasets. In this paper, we propose an adaptive Hypergraph-based EM truth discovery method, HGEM. The HGEM algorithm leverages the topological characteristics of hypergraphs to model sparse datasets, thereby improving its performance in evaluating the reliability of participants and the true value of the event to be observed. Experiments based on simulated and real-world scenarios demonstrate that HGEM consistently achieves higher predictive accuracy.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"112 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roufaida Laidi, Djamel Djenouri, Youcef Djenouri, Jerry Chun-Wei Lin
This study introduces an innovative method aimed at reducing energy consumption in sensor networks by predicting sensor data, thereby extending the network’s operational lifespan. Our model, TG-SPRED (Temporal Graph Sensor Prediction), predicts readings for a subset of sensors designated to enter sleep mode in each time slot, based on a non-scheduling-dependent approach. This flexibility allows for extended sensor inactivity periods without compromising data accuracy. TG-SPRED addresses the complexities of event-based sensing—a domain that has been somewhat overlooked in existing literature—by recognizing and leveraging the inherent temporal and spatial correlations among events. It combines the strengths of Gated Recurrent Units (GRUs) and Graph Convolutional Networks (GCN) to analyze temporal data and spatial relationships within the sensor network graph, where connections are defined by sensor proximities. An adversarial training mechanism, featuring a critic network employing the Wasserstein distance for performance measurement, further refines the predictive accuracy. Comparative analysis against six leading solutions using four critical metrics—F-score, energy consumption, network lifetime, and computational efficiency—showcases our approach’s superior performance in both accuracy and energy efficiency.
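The GRU-plus-GCN combination can be pictured as one spatial smoothing step feeding a per-node recurrent update; this numpy sketch shows that pattern with assumed dimensions and a single layer (TG-SPRED's actual architecture and its adversarial critic are not reproduced here):

```python
import numpy as np

# Sketch of the GCN-then-GRU pattern: spatially smooth node features with a
# normalized adjacency, then update each node's hidden state with a GRU
# cell. Dimensions and the single-layer structure are illustrative.
rng = np.random.default_rng(2)
n, d = 5, 8                                  # nodes, feature size
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A_hat = A / A.sum(1, keepdims=True)          # row-normalized adjacency (chain graph)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

Wz, Wr, Wh = (rng.normal(size=(2 * d, d)) * 0.1 for _ in range(3))

def gcn_gru_step(x_t, h):
    x_s = A_hat @ x_t                        # GCN step: aggregate neighbor features
    xh = np.concatenate([x_s, h], axis=1)    # per-node GRU update follows
    z, r = sigmoid(xh @ Wz), sigmoid(xh @ Wr)
    h_tilde = np.tanh(np.concatenate([x_s, r * h], axis=1) @ Wh)
    return (1 - z) * h + z * h_tilde

h = np.zeros((n, d))
for _ in range(3):                           # a short input sequence
    h = gcn_gru_step(rng.normal(size=(n, d)), h)
print(h.shape)
```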
{"title":"TG-SPRED: Temporal Graph for Sensorial Data PREDiction","authors":"Roufaida Laidi, Djamel Djenouri, Youcef Djenouri, Jerry Chun-Wei Lin","doi":"10.1145/3649892","DOIUrl":"https://doi.org/10.1145/3649892","url":null,"abstract":"<p>This study introduces an innovative method aimed at reducing energy consumption in sensor networks by predicting sensor data, thereby extending the network’s operational lifespan. Our model, TG-SPRED (Temporal Graph Sensor Prediction), predicts readings for a subset of sensors designated to enter sleep mode in each time slot, based on a non-scheduling-dependent approach. This flexibility allows for extended sensor inactivity periods without compromising data accuracy. TG-SPRED addresses the complexities of event-based sensing—a domain that has been somewhat overlooked in existing literature—by recognizing and leveraging the inherent temporal and spatial correlations among events. It combines the strengths of Gated Recurrent Units (GRUs) and Graph Convolutional Networks (GCN) to analyze temporal data and spatial relationships within the sensor network graph, where connections are defined by sensor proximities. An adversarial training mechanism, featuring a critic network employing the Wasserstein distance for performance measurement, further refines the predictive accuracy. Comparative analysis against six leading solutions using four critical metrics—F-score, energy consumption, network lifetime, and computational efficiency—showcases our approach’s superior performance in both accuracy and energy efficiency.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"174 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The smart city is an increasingly popular concept in urban development. In a smart city, numerous sensor services are generated by IoT sensors in a distributed manner, requiring proper management and effective interaction to guarantee the connectivity of different regions. However, the sensitive nature of sensor data raises concerns about hosting these services on public cloud centers or edge servers, despite providers' assurances of reliability. Yet purely local deployment and maintenance of sensor services can turn their providers into "data isolated islands", hindering the construction of the smart city. This paper proposes a distributed trustworthy sensor service network architecture named DTSSN to support the building of a fully distributed sensor service network. The architecture operates through the collaboration of two core devices, the sensor service switch and the sensor service router, which together enable the registration, discovery, invocation, transaction, and monitoring of cross-region sensor services. A lightweight blockchain-based trustworthy transaction mechanism is then proposed to realize SLA-based automatic service transactions while reducing potential risks in the service network. Comparative analysis and simulation experiments validate the effectiveness of the DTSSN architecture in terms of scalability, availability, and trustworthiness, underscoring its potential in advancing smart city development and governance.
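As a rough illustration of an SLA-gated, hash-chained transaction record of the kind a blockchain-backed service ledger might append (the field names, the SLA check, and the chain structure are all assumptions; DTSSN's actual ledger format is defined in the paper):

```python
import hashlib, json, time

# Hypothetical SLA-gated transaction record for a hash-chained service
# ledger. All field names and the latency-based SLA check are assumed.
def make_record(prev_hash, service_id, latency_ms, sla_max_latency_ms):
    body = {
        "service": service_id,
        "latency_ms": latency_ms,
        "sla_met": latency_ms <= sla_max_latency_ms,   # automatic SLA check
        "ts": time.time(),
        "prev": prev_hash,                              # link to prior record
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body, digest

genesis = "0" * 64
rec, h = make_record(genesis, "sensor-svc-42", latency_ms=80, sla_max_latency_ms=100)
print(rec["sla_met"], h[:12])
```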
{"title":"DTSSN: A Distributed Trustworthy Sensor Service Network Architecture for Smart City","authors":"Shengye Pang, Jiayin Luo, Xinkui Zhao, Jintao Chen, Fan Wang, Jianwei Yin","doi":"10.1145/3649893","DOIUrl":"https://doi.org/10.1145/3649893","url":null,"abstract":"<p>The smart city is an increasingly popular concept when it comes to urban development. In a smart city, numerous sensor services are generated by IoT sensors in a distributed manner, requiring proper management and effective interaction to guarantee the connectivity of different regions. However, the sensitive nature of sensor data raises concerns over joining public cloud centers or edge servers, despite assurances of their reliability from providers. Local deployment and maintenance of sensor services may cause these service providers to become ”data isolated islands”, hindering the construction process of smart city. This paper proposes a distributed trustworthy sensor service network architecture named DTSSN to support the building of a fully distributed sensor service network. The proposed network architecture operates through the collaboration of two core devices, the sensor service switch and router, to effectively enable the registration, discovery, invocation, transaction, and monitoring of cross-region sensor services. Then, a lightweight trustworthy transaction mechanism based on blockchain is proposed to realize SLA-based automatic service transaction while reducing potential risks in the service network. Comparative analysis and simulation experiments validate the effectiveness of the DTSSN architecture in terms of scalability, availability, and trustworthiness, underscoring its potential in advancing smart city development and governance.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"46 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140011026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hongwei Du, Jingfang Su, Zhao Zhang, Zhenhua Duan, Cong Tian, Ding-Zhu Du
The study focuses on achieving full-view coverage in a camera sensor network to effectively monitor moving objects from multiple perspectives. Three key issues are addressed: camera direction selection, camera location selection, and moving object monitoring. Coverage of moving targets is maximized in three steps. The first step proposes the Maximum Group Set Coverage (MGSC) algorithm, which selects camera sensor directions for traditional target coverage. In the second step, a composed target, merged from a set of fixed directional targets, represents the multiple views of a moving object; building on MGSC, the Maximum Group Set Coverage with Composed Targets (MGSC-CT) algorithm determines camera sensor directions that cover subsets of fixed directional targets. In the third step, a constraint on the number of cameras is imposed for location selection, leading to the Maximum Group Set Coverage with Size Constraint (MGSC-SC) algorithm. Each step formulates a group set coverage problem and provides an algorithmic solution. Furthermore, improved versions of MGSC-CT and MGSC-SC are developed to speed up coverage. Computer simulations demonstrate the strong performance of the algorithms.
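The core of all three algorithms is maximum-coverage selection; the classic greedy rule, shown below, repeatedly picks the camera direction that covers the most not-yet-covered targets. The sketch omits MGSC's group constraint (at most one direction per camera), which the paper's algorithms enforce:

```python
# Classic greedy rule for maximum coverage: repeatedly pick the camera
# direction whose target set adds the most uncovered targets. The MGSC
# family adds group/size constraints beyond this core idea.
def greedy_max_coverage(direction_targets, budget):
    """direction_targets: dict of (camera, direction) -> set of targets."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(direction_targets,
                   key=lambda d: len(direction_targets[d] - covered))
        if not direction_targets[best] - covered:
            break                      # nothing new can be covered
        chosen.append(best)
        covered |= direction_targets[best]
    return chosen, covered

dirs = {("c1", 0): {1, 2}, ("c1", 90): {2, 3}, ("c2", 0): {3, 4, 5}}
print(greedy_max_coverage(dirs, budget=2))   # picks ("c2", 0), then ("c1", 0)
```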
{"title":"Full View Maximum Coverage of Camera Sensors: Moving Object Monitoring","authors":"Hongwei Du, Jingfang Su, Zhao Zhang, Zhenhua Duan, Cong Tian, Ding-Zhu Du","doi":"10.1145/3649314","DOIUrl":"https://doi.org/10.1145/3649314","url":null,"abstract":"<p>The study focuses on achieving full view coverage in a camera sensor network to effectively monitor moving objects from multiple perspectives. Three key issues are addressed: camera direction selection, location selection, and moving object monitoring. There are three steps to maximize coverage of moving targets. The first step involves proposing the Maximum Group Set Coverage (MGSC) algorithm, which selects the camera sensor direction for traditional target coverage. In the second step, a composed target merged from a set of fixed directional targets represents multiple views of a moving object. Building upon the MGSC algorithm, the Maximum Group Set Coverage with Composed Targets (MGSC-CT) algorithm is presented to determine camera sensor directions that cover subsets of fixed directional targets. Additionally, a constraint on the number of cameras is imposed for camera location selection, leading to the study of the Maximum Group Set Coverage with Size Constraint (MGSC-SC) algorithm. Each of these steps formulates a problem on group set coverage and provides an algorithmic solution. Furthermore, improved versions of MGSC-CT and MGSC-SC are developed to enhance the coverage speed. Computer simulations are employed to demonstrate the significant performance of the algorithms.</p>","PeriodicalId":50910,"journal":{"name":"ACM Transactions on Sensor Networks","volume":"37 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140003168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}