Cloud-based Collaborative Agricultural Learning with Flexible Model Size and Adaptive Batch Number
Hongjian Shi, Ilyas Bayanbayev, Wenkai Zheng, Ruhui Ma, Haibing Guan
ACM Transactions on Sensor Networks, published 2023-10-21. DOI: https://doi.org/10.1145/3628431

With the rapid growth in the world population, developing agricultural technologies has become an urgent need. Sensor networks have been widely used to monitor and manage agricultural status, and Artificial Intelligence (AI) techniques are adopted for their high accuracy in analyzing the massive data collected through these sensor networks. However, the datasets on the devices of agricultural applications are usually incomplete and small, which limits the performance of AI algorithms. Thus, researchers turn to Collaborative Learning (CL) to utilize the data on multiple devices to train a global model while preserving privacy. However, current CL frameworks for agricultural applications suffer from three problems: data heterogeneity, system heterogeneity, and communication overhead. In this paper, we propose cloud-based Collaborative Agricultural Learning with Flexible model size and Adaptive batch number (CALFA) to improve the efficiency and applicability of the training process while maintaining its effectiveness. CALFA contains three modules. The Classification Pyramid allows devices to use models of different sizes during training and enables the classification of objects of different sizes. Adaptive Aggregation modifies the aggregation weights to maintain convergence speed and accuracy. Adaptive Adjustment modifies the training batch numbers to mitigate communication overhead. Experimental results illustrate that CALFA outperforms other SOTA CL frameworks, reducing communication overhead by up to 75% with nearly no accuracy loss. CALFA also enables training on more devices by reducing the model size.
Large-Scale Video Analytics with Cloud-Edge Collaborative Continuous Learning
Ya Nan, Shiqi Jiang, Mo Li
ACM Transactions on Sensor Networks, published 2023-10-20. DOI: https://doi.org/10.1145/3624478

Deep learning–based video analytics demands high network bandwidth to ferry large volumes of data when deployed on the cloud, while at the edge only lightweight deep neural network (DNN) models are affordable due to computational constraints. In this article, a cloud–edge collaborative architecture is proposed that combines edge-based inference with cloud-assisted continuous learning. Lightweight DNN models are maintained at the edge servers and continuously retrained with a more comprehensive model on the cloud, achieving high video analytics performance while reducing the amount of data transmitted between edge servers and the cloud. The design faces the challenge of constrained computation resources at the edge servers and limited network bandwidth on the edge–cloud links. An accuracy gradient-based resource allocation algorithm is proposed to allocate the limited computation and network resources across different video streams to maximize overall performance. A prototype system is implemented, and experimental results demonstrate the effectiveness of the system with up to 28.6% absolute mean average precision gain compared with alternative designs.
End-to-End Target Liveness Detection via mmWave Radar and Vision Fusion for Autonomous Vehicles
Shuai Wang, Luoyu Mei, Zhimeng Yin, Hao Li, Ruofeng Liu, Wenchao Jiang, Chris Xiaoxuan Lu
ACM Transactions on Sensor Networks, published 2023-10-18. DOI: https://doi.org/10.1145/3628453

The successful operation of autonomous vehicles hinges on their ability to accurately identify objects in their vicinity, particularly living targets such as bikers and pedestrians. However, visual interference inherent in real-world environments, such as omnipresent billboards, poses substantial challenges to existing vision-based detection technologies: these sources of interference exhibit visual attributes similar to living targets, leading to erroneous identification. We address this problem by harnessing the capabilities of mmWave radar, a vital sensor in autonomous vehicles, in combination with vision technology, thereby contributing a unique solution for liveness target detection. We propose a methodology that extracts features from the mmWave radar signal and integrates them with vision to achieve end-to-end liveness target detection. The proposed methodology is implemented and evaluated on the commodity mmWave radar IWR6843ISK-ODS and a Logitech camera as the vision sensor. Our extensive evaluation reveals that the proposed method accomplishes liveness target detection with a mean average precision (mAP) of 98.1%, surpassing the performance of existing studies.
Multi-User Mobile Augmented Reality with ID-aware Visual Interaction
Xinjun Cai, Zheng Yang, Liang Dong, Qiang Ma, Xin Miao, Zhuo Liu
ACM Transactions on Sensor Networks, published 2023-10-12. DOI: https://doi.org/10.1145/3623638

Most existing multi-user Augmented Reality (AR) systems only support multiple co-located users viewing a common set of virtual objects but lack the ability to let each user directly interact with other users appearing in his or her view. Such multi-user AR systems need to detect human keypoints and estimate device poses (for identifying different users) at the same time. However, due to the stringent low-latency requirements and the intensive computation of these two capabilities, previous research enables only one of them on mobile devices, even with the aid of an edge server. Integrating the two capabilities is promising but non-trivial in terms of latency, accuracy, and matching. To fill this gap, we propose DiTing to achieve real-time ID-aware multi-device visual interaction for multi-user AR applications, with three key innovations: Shared On-device Tracking merges similar computation to optimize latency, the Tightly Coupled Dual Pipeline enhances the accuracy of each task through mutual assistance, and the Body Affinity Particle Filter precisely matches device poses with human bodies. We implement DiTing on four types of mobile AR devices and develop a multi-user AR game as a case study. Extensive experiments show that DiTing provides high-quality human keypoint detection and pose estimation in real time (30 fps) for ID-aware multi-device interaction and outperforms SOTA baseline approaches.
Efficient Task-Driven Video Data Privacy Protection for Smart Camera Surveillance System
Zhiqiang Wang, Jiahui Hou, Guangyu Wu, Suyuan Liu, Puhan Luo, Xiangyang Li
ACM Transactions on Sensor Networks, published 2023-10-02. DOI: https://doi.org/10.1145/3625825

As one of the most commonly used AIoT sensors, smart cameras and their supporting services, namely cloud video surveillance (CVS) systems, have brought great convenience to people's lives. Recent CVS providers use different machine learning (ML) techniques to improve their services (regarded as tasks) based on the uploaded video. However, uploading data to the CVS providers may cause severe privacy issues. Existing works that remove privacy information cannot achieve a good trade-off between data usability and privacy because the importance of information varies with the task. In addition, it is challenging to design a real-time privacy protection mechanism, especially on resource-constrained smart cameras. In this work, we design a task-driven and efficient video privacy protection mechanism for a better trade-off between privacy and data usability. We use Class Activation Mapping to protect privacy while preserving data usability, and to improve efficiency we utilize the motion vectors and residual matrices produced during video encoding. Our work outperforms ROI-based methods in data protection while preserving data usability: the attack accuracy drops by 70%, while the task accuracy is comparable to that without protection (within ±4%). The average protection frame rate for High Definition video can exceed 16 fps even on a CPU.
Wave-CapNet: A Wavelet Neuron-Based Wi-Fi Sensing Model for Human Identification
Zhiyi Zhou, Lei Wang, Xinxin Lu, Yu Tian, Jian Fang, Bingxian Lu
ACM Transactions on Sensor Networks, published 2023-09-19. DOI: https://doi.org/10.1145/3624746

Gait is regarded as a unique feature for identifying people, and gait recognition is the basis of various customized IoT services. Unlike traditional techniques for identifying people, the Wi-Fi-based technique is unconstrained by illumination conditions and eliminates the need for dense, specialized sensors and wearable devices. Although deep learning-based sensing models are conducive to the development of Wi-Fi-based identification, they rely on large amounts of data and require long training times, which limits their use for identifying people. In this study, we propose a Wi-Fi sensing model called Wave-CapNet for human identification. We use data processing to eliminate errors in the raw data so that the model can extract the characteristics of channel state information (CSI). We also design a dedicated adaptive wavelet neural network to extract representative features from Wi-Fi signals with only a few epochs of training and a small number of parameters. Experiments show that it can identify human gait with an average accuracy of 99%. Moreover, it achieves an average accuracy of 95% using only 10% of the data and fewer than five epochs, outperforming state-of-the-art (SOTA) methods.
Taming Irregular Cardiac Signals for Biometric Identification
Weizheng Wang, Qing Wang, Marco Zuniga
ACM Transactions on Sensor Networks, published 2023-09-15. DOI: https://doi.org/10.1145/3624570

Cardiac patterns are being used to provide hard-to-forge biometric signatures in identification applications. However, this performance is obtained under controlled scenarios where cardiac signals maintain a relatively uniform pattern, facilitating the identification process. In this work, we analyze cardiac signals collected in more realistic (uncontrolled) scenarios and show that their high signal variability makes it harder to obtain stable and distinct features. Faced with these irregular signals, the state-of-the-art (SOTA) loses significant performance. To solve these problems, we propose the CardioID framework with two novel properties. First, we design an adaptive method that achieves stable and distinct features by tailoring the filtering process to each user's heart rate. Second, we show that users can have multiple cardiac morphologies, offering a bigger pool of cardiac signals compared to the SOTA. Considering three uncontrolled datasets, our evaluation yields two main insights. First, using a PPG sensor with healthy individuals, the SOTA's balanced accuracy (BAC) drops from 90-95% to 75-80%, while our method maintains a BAC above 90%. Second, under more challenging conditions (using smartphone cameras or monitoring unhealthy individuals), the SOTA's BAC drops to 65-75%, while our method achieves 75-85%.
Collecting Multi-type and Correlation-Constrained Streaming Sensor Data with Local Differential Privacy
Yue Fu, Qingqing Ye, Rong Du, Haibo Hu
ACM Transactions on Sensor Networks, published 2023-09-13. DOI: https://doi.org/10.1145/3623637

Local differential privacy (LDP) is a promising privacy model for distributed data collection and has been widely deployed in real-world systems (e.g., Chrome, iOS, macOS). In LDP-based mechanisms, an aggregator collects private values perturbed by each user and then analyzes them to estimate statistics such as frequency and mean. Most existing works focus on simple scalar value types, such as boolean and categorical values. However, with the emergence of smart sensors and the Internet of Things, high-dimensional data are gaining increasing popularity. In many cases where more than one type of sensor data is collected simultaneously, correlations exist between various attributes of such data, e.g., temperature and luminance. To ensure LDP for high-dimensional data, existing solutions either partition the privacy budget ϵ among these correlated attributes or adopt sampling, both of which dilute the density of useful information and thus result in poor data utility. In this paper, we propose a relaxed LDP model, namely univariate dominance local differential privacy (UDLDP), for high-dimensional data. We quantify the correlations between attributes and present a correlation-bounded perturbation (CBP) mechanism that optimizes the partitioning of the privacy budget over each correlated attribute. Furthermore, we extend CBP to support sampling, a common bandwidth reduction technique in sensor networks and the Internet of Things, and derive the utility-optimal allocation of sampling probabilities among attributes, which leads to the correlation-bounded perturbation mechanism with sampling (CBPS). Finally, we discuss how to collect and leverage correlations from real-time data streams with a by-round algorithm to further enhance utility. The performance of the proposed mechanisms is evaluated and compared with state-of-the-art LDP mechanisms on real-world and synthetic datasets.
Intelligent Cooperative Caching at Mobile Edge based on Offline Deep Reinforcement Learning
Zhe Wang, Jia Hu, Geyong Min, Zhiwei Zhao
ACM Transactions on Sensor Networks, published 2023-09-09. DOI: https://doi.org/10.1145/3623398

Cooperative edge caching enables edge servers to jointly utilize their cache to store popular contents, drastically reducing the latency of content acquisition. One fundamental problem of cooperative caching is how to coordinate the cache replacement decisions at edge servers to meet users' dynamic requirements and avoid caching redundant contents. Online deep reinforcement learning (DRL) is a promising way to solve this problem by learning a cooperative cache replacement policy through continuous interactions (trial and error) with the environment. However, the sampling process of these interactions is usually expensive and time-consuming, hindering the practical deployment of online DRL-based methods. To bridge this gap, we propose a novel Delay-awarE Cooperative cache replacement method based on Offline deep Reinforcement learning (DECOR), which exploits existing data at the mobile edge to train an effective policy while avoiding expensive data sampling in the environment. A specific convolutional neural network is also developed to improve training efficiency and cache performance. Experimental results show that DECOR learns a superior offline policy from a static dataset compared with an advanced online DRL-based method. Moreover, the learned offline policy outperforms the behavior policy used to collect the dataset by up to 35.9%.
Privacy-Enhanced Cooperative Storage Scheme for Contact-free Sensory Data in AIoT with Efficient Synchronization
Yaxin Mei, Wenhua Wang, Yuzhu Liang, Qin Liu, Shuhong Chen, Tian Wang
ACM Transactions on Sensor Networks, published 2023-09-02. DOI: https://doi.org/10.1145/3617998

The growing popularity of contact-free smart sensing has contributed to the development of the Artificial Intelligence of Things (AIoT). Contact-free sensory data has great potential for mining and analyzing hidden information in AIoT-enabled applications. However, due to the limited storage resources of contact-free smart sensing devices, data is naturally stored in the cloud, which puts it at risk of privacy leakage. Cloud storage is generally considered insecure: on the one hand, the openness of the cloud environment leaves the data vulnerable to attack, and the complex AIoT environment also exposes the data transmission process to third parties; on the other hand, the Cloud Service Provider (CSP) is untrusted. In this paper, to ensure the security of data from contact-free smart sensing devices, a Cloud-Edge-End cooperative storage scheme is proposed that takes full advantage of the differences among the cloud, edge, and end. First, the processed sensory data is stored separately in the three layers by utilizing a well-designed data partitioning strategy. This scheme increases the difficulty of privacy leakage during transmission and resists both internal and external attacks. Moreover, because contact-free sensory data is highly time-dependent, this paper combines the Cloud-Edge-End cooperation model with a delta-based data update method and extends it into a hybrid update mode to improve synchronization efficiency. Theoretical analysis and experimental results show that the proposed cooperative storage method can resist various security threats under adverse conditions and outperforms other update methods in synchronization efficiency, significantly reducing the synchronization overhead in AIoT.