Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228976
Yue Wu, Jingao Xu, Danyang Li, Yadong Xie, Hao Cao, Fan Li, Zheng Yang
Location awareness is one of the key capabilities for drone applications and has been explored through various visual sensors. However, standard cameras suffer from motion blur at high moving speeds and produce low-quality images under poor illumination, which makes motion tracking challenging for drones. Recently, a class of bio-inspired sensors called event cameras has emerged, offering advantages such as high temporal resolution, high dynamic range, and low latency, which motivates us to explore their potential for motion tracking in these limited scenarios. In this paper, we propose FlyTracker, which aims to give drones visual awareness of both their own pose and the location-relevant context around them, using a monocular event camera. In FlyTracker, a background-subtraction-based method is proposed to distinguish moving objects from the background, and fusion-based photometric features are carefully designed to obtain motion information. Through multilevel fusion of events and images, which are heterogeneous visual data, FlyTracker can effectively and reliably track the 6-DoF pose of the drone as well as monitor the relative positions of moving obstacles. We evaluate the performance of FlyTracker in different environments, and the results show that it is more accurate than state-of-the-art baselines.
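The background-subtraction idea can be illustrated with a deliberately simple frame-based sketch. This is not the paper's event-domain formulation; the running-average model, threshold, and all names below are invented for illustration only:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: slowly absorb static scene changes."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels deviating strongly from the background model are flagged as moving objects."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy scene: uniform static background with one bright moving blob.
bg = np.full((8, 8), 10.0)
frame = np.full((8, 8), 10.0)
frame[2:4, 2:4] = 200.0  # the moving object
mask = foreground_mask(bg, frame)
```

In a real event-camera pipeline the "frame" would be replaced by accumulated event counts, but the separation principle is the same.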
Title: "FlyTracker: Motion Tracking and Obstacle Detection for Drones Using Event Cameras"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228993
Yinxin Wan, Xuanli Lin, Kuai Xu, Feng Wang, G. Xue
Smart home IoT devices have been widely deployed and connected to many home networks for various applications such as intelligent home automation, connected healthcare, and security surveillance. The network traffic traces generated by IoT devices have enabled recent research advances in smart home network measurement. However, due to the cloud-based communication model of smart home IoT devices and the lack of traffic data collected at the cloud end, little effort has been devoted to extracting the spatial information of IoT device events to determine where a device event is triggered. In this paper, we examine why extracting IoT device events’ spatial information is challenging by analyzing the communication model of the smart home IoT system. We propose a system named IoTDuet for determining whether a device event is triggered locally or remotely by utilizing the fact that the controlling devices such as smartphones and tablets always communicate with cloud servers with relatively stable domain name information when issuing commands from the home network. We further show the importance of extracting spatial information of IoT device events by exploring its applications in smart home safety monitoring.
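IoTDuet's core observation — a controlling device contacts a stable cloud command domain from inside the home network shortly before a locally triggered event — can be caricatured as a tiny classifier. The domain name and time window below are hypothetical placeholders, not values from the paper:

```python
# Hypothetical command-domain set; a real deployment would learn these
# stable controller-to-cloud domains from observed traffic traces.
CLOUD_COMMAND_DOMAINS = {"api.vendor-cloud.example"}

def classify_trigger(event_time, controller_flows, window=5.0):
    """Label a device event 'local' if a controlling device (phone/tablet)
    contacted a known command domain within `window` seconds before the
    event; otherwise assume the command came from outside ('remote')."""
    for t, domain in controller_flows:
        if 0.0 <= event_time - t <= window and domain in CLOUD_COMMAND_DOMAINS:
            return "local"
    return "remote"

flows = [(100.0, "api.vendor-cloud.example")]  # (timestamp, domain) pairs
```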
Title: "Extracting Spatial Information of IoT Device Events for Smart Home Safety Monitoring"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228900
Yin-Hae Huang, Letian Zhang, J. Xu
The multi-armed bandit problem is a classical model of sequential decision-making under uncertainty. The majority of existing works study bandit problems in either the stochastic reward regime or the adversarial reward regime, but the intersection of these two regimes is much less investigated. In this paper, we study a new bandit problem, called adversarial group linear bandits (AGLB), in which rewards are generated as a joint outcome of a stochastic process and adversarial behavior. In particular, the reward the learner receives is not only a noisy linear function of the arm the learner selects within a group but also depends on the group-level attack decision made by the adversary. Such problems arise in many real-world applications, e.g., collaborative edge inference and multi-site online ad placement. To combat the uncertainty in the coupled stochastic and adversarial rewards, we develop a new bandit algorithm, called EXPUCB, which marries the classical LinUCB and EXP3 algorithms, and we prove its sublinear regret. We apply EXPUCB to the collaborative edge inference problem and evaluate its performance. Extensive simulation results verify the superior learning ability of EXPUCB under coupled stochastic and adversarial rewards.
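As a rough illustration of how EXP3 and LinUCB might be married, the sketch below samples a group with EXP3-style mixed probabilities and then picks an arm inside that group by an optimistic LinUCB score. The paper's exact EXPUCB update rules and constants are not reproduced; everything here is a generic textbook combination:

```python
import numpy as np

class EXPUCB:
    """Illustrative sketch: EXP3 over groups, LinUCB within the chosen group."""
    def __init__(self, group_arms, d, gamma=0.1, alpha=1.0):
        self.arms = group_arms             # list: group -> list of feature vectors
        self.w = np.ones(len(group_arms))  # EXP3 weights over groups
        self.gamma, self.alpha = gamma, alpha
        self.A = [np.eye(d) for _ in group_arms]    # LinUCB statistics per group
        self.b = [np.zeros(d) for _ in group_arms]

    def group_probs(self):
        K = len(self.w)
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / K

    def select(self, rng):
        p = self.group_probs()
        g = rng.choice(len(self.w), p=p)            # EXP3: sample a group
        theta = np.linalg.solve(self.A[g], self.b[g])
        scores = [x @ theta + self.alpha * np.sqrt(x @ np.linalg.solve(self.A[g], x))
                  for x in self.arms[g]]            # LinUCB: optimistic arm score
        return g, int(np.argmax(scores))

    def update(self, g, a, reward):
        x = self.arms[g][a]
        self.A[g] += np.outer(x, x)                 # LinUCB update
        self.b[g] += reward * x
        p = self.group_probs()
        self.w[g] *= np.exp(self.gamma * reward / (p[g] * len(self.w)))  # EXP3 update

rng = np.random.default_rng(0)
arms = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
        [np.array([0.6, 0.8]), np.array([0.5, 0.5])]]
bandit = EXPUCB(arms, d=2)
for _ in range(30):
    g, a = bandit.select(rng)
    bandit.update(g, a, reward=1.0 if (g, a) == (0, 0) else 0.2)
```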
Title: "Adversarial Group Linear Bandits and Its Application to Collaborative Edge Inference"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228955
Yu Sun, Chi Lin, Wei Yang, Jiankang Ren, Lei Wang, Guowei Wu, Qiang Zhang
As a novel solution for IoT applications, wireless rechargeable sensor networks (WRSNs) have seen widespread deployment in recent years. Existing WRSN scheduling methods focus extensively on maximizing the network charging utility in the fixed-node case. However, when sensor nodes are deployed in dynamic environments (e.g., maritime environments) where sensors move randomly over time, existing approaches are likely to incur significant performance loss or even fail to execute. In this work, we focus on serving dynamic nodes whose locations vary randomly and formalize the dynamic WRSN charging utility maximization problem (termed the MATA problem). By discretizing candidate charging locations and modeling the dynamic charging process, we propose a near-optimal algorithm for maximizing charging utility. Moreover, we point out the long-short-term conflict of dynamic sensors: their short-term location distributions usually deviate from the long-term expectations. To tackle this issue, we further design an online learning algorithm based on the combinatorial multi-armed bandit (CMAB) model, which iteratively adjusts the charging strategy and adapts well to nodes' short-term location deviations. Extensive experiments and simulations demonstrate that the proposed scheme can effectively charge dynamic sensors and achieves higher charging utility than baseline algorithms in both the long term and the short term.
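A minimal combinatorial-UCB sketch of the online-learning idea — select k candidate charging locations by optimistic utility estimates, then refine the estimates from observed charging rewards. The indices and constants below are textbook CUCB, not the paper's DPS algorithm:

```python
import math

def cucb_select(counts, means, t, k):
    """Combinatorial UCB: pick the k charging locations with the highest
    optimistic utility index; unexplored locations are tried first."""
    ucb = []
    for c, m in zip(counts, means):
        if c == 0:
            ucb.append(float("inf"))  # force exploration of untried locations
        else:
            ucb.append(m + math.sqrt(2 * math.log(max(t, 2)) / c))
    return sorted(range(len(ucb)), key=lambda i: -ucb[i])[:k]

def update(counts, means, chosen, rewards):
    """Incremental mean update for the locations just served."""
    for i, r in zip(chosen, rewards):
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]

counts = [0, 5, 5]
means = [0.0, 0.9, 0.1]
chosen = cucb_select(counts, means, t=10, k=2)
update(counts, means, [1], [0.5])
```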
Title: "Charging Dynamic Sensors through Online Learning"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228904
Minghao Ye, Junjie Zhang, Zehua Guo, H. J. Chao
Traffic Engineering (TE) has been widely used by network operators to improve network performance and provide better service quality to users. One major challenge for TE is how to generate good routing strategies that adapt to highly dynamic future traffic. Unfortunately, existing works either experience severe performance degradation under unexpected traffic fluctuations or sacrifice performance optimality to guarantee worst-case performance when traffic is relatively stable. In this paper, we propose LARRI, a learning-based TE scheme that predicts adaptive routing strategies for future, unknown traffic scenarios. By learning and predicting a routing that can handle an appropriate range of possible future traffic matrices, LARRI effectively realizes a trade-off between performance optimality and a worst-case performance guarantee. This is done by integrating the prediction of the future demand range and the imitation of optimal range routing into one step. Moreover, LARRI employs a scalable graph neural network architecture to greatly facilitate training and inference. Extensive simulation results on six real-world network topologies and traffic traces show that LARRI achieves near-optimal load-balancing performance in future traffic scenarios, with up to 43.3% worst-case performance improvement over state-of-the-art baselines, and also provides the lowest end-to-end delay under dynamic traffic fluctuations.
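The "range" objective that range routing guards against can be made concrete: given a fixed routing and a set of possible traffic matrices, the quantity to bound is the worst-case maximum link utilization. A toy evaluation sketch, with data structures invented purely for illustration (LARRI itself learns the routing; this only shows the metric):

```python
def worst_case_utilization(routing, traffic_range, capacity):
    """Worst-case max link utilization of a fixed routing over a set of
    candidate traffic matrices. routing maps (src, dst) -> list of links;
    each traffic matrix maps (src, dst) -> demand."""
    worst = 0.0
    for tm in traffic_range:
        load = {}
        for (s, t), demand in tm.items():
            for link in routing[(s, t)]:
                load[link] = load.get(link, 0.0) + demand
        util = max(load[l] / capacity[l] for l in load)
        worst = max(worst, util)
    return worst

# Two flows sharing one link "a", under two possible traffic matrices.
routing = {(0, 1): ["a"], (1, 0): ["a"]}
capacity = {"a": 10.0}
tms = [{(0, 1): 4.0, (1, 0): 4.0}, {(0, 1): 8.0, (1, 0): 1.0}]
worst = worst_case_utilization(routing, tms, capacity)
```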
Title: "LARRI: Learning-based Adaptive Range Routing for Highly Dynamic Traffic in WANs"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228982
E. Wang, Weiting Liu, Wenbin Liu, Chaocan Xiang, Boai Yang, Yongjian Yang
Mobile CrowdSensing (MCS) is a data sensing paradigm that recruits users carrying mobile terminals to collect data. As its variant, Sparse MCS has been proposed for large-scale, fine-grained sensing tasks, with the advantage of collecting only a few data points to infer the unsensed data. However, in many real-world scenarios, such as early prevention of epidemics, people are interested not only in current data but also in future, or even long-term future, data, and the latter may be more important. Long-term prediction not only reduces sensing cost but also reveals trends and other characteristics of the data. In this paper, we propose a Transformer-based spatiotemporal model that infers and predicts data from sparse sensed data by exploiting spatiotemporal relationships. We design a spatiotemporal feature embedding that injects prior spatiotemporal information about the sensing map into the model to guide learning. Moreover, we design a novel multi-head spatiotemporal attention mechanism to dynamically capture spatiotemporal relationships among the data. Extensive experiments on three types of typical urban sensing tasks verify the effectiveness of the proposed algorithms in improving inference and long-term prediction accuracy with sparse sensed data.
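For reference, the core of any multi-head attention mechanism, spatiotemporal or not, is scaled dot-product attention applied per head. A minimal NumPy sketch with Q=K=V; the paper's learned projections and spatiotemporal embeddings are omitted:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads):
    """Scaled dot-product attention over tokens X (n_tokens x d_model),
    with the model dimension split evenly across heads."""
    n, d = X.shape
    assert d % n_heads == 0
    dh = d // n_heads
    out = np.empty_like(X)
    for h in range(n_heads):
        Xh = X[:, h * dh:(h + 1) * dh]
        A = softmax(Xh @ Xh.T / np.sqrt(dh))  # attention weights, rows sum to 1
        out[:, h * dh:(h + 1) * dh] = A @ Xh  # convex mixture of token values
    return out

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))  # 6 spatiotemporal tokens, model dim 8
Y = multi_head_attention(X, n_heads=2)
```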
Title: "Spatiotemporal Transformer for Data Inference and Long Prediction in Sparse Mobile CrowdSensing"
Low Power Wide Area Networks (LPWANs) have shown promise in connecting large-scale, low-cost devices through low-power, long-distance communication. However, existing LPWANs do not work well in real deployments due to severe packet collisions. We propose OrthoRa, a new technology that significantly improves concurrency for low-power, long-distance LPWAN transmission. The key to OrthoRa is a novel design, Orthogonal Scatter Chirp Spreading Spectrum (OSCSS), which enables orthogonal packet transmissions while retaining low-SNR communication in LPWANs. Different nodes send packets encoded with different orthogonal scatter chirps, and the receiver can decode collided packets from different nodes. We theoretically prove that OrthoRa provides very high concurrency for low-SNR communication under different scenarios. For real networks, we address the practical challenges of detecting multiple collided packets, identifying the scatter chirp for decoding each packet, and accurate packet synchronization under carrier frequency offset. We implement OrthoRa on HackRF One and extensively evaluate its performance. The evaluation results show that OrthoRa improves network throughput and concurrency by 50× compared with LoRa.
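The details of OSCSS are specific to the paper, but the underlying principle — cyclically shifted chirps are mutually orthogonal and a receiver can separate them by dechirping — is standard in chirp spread spectrum (the LoRa-style symbols below) and can be verified numerically:

```python
import numpy as np

N = 64               # samples per symbol
n = np.arange(N)

def chirp(shift):
    """Baseband up-chirp with a cyclic frequency shift encoding the symbol."""
    return np.exp(1j * np.pi * ((n + shift) % N) ** 2 / N)

def dechirp_peak(signal):
    """Multiply by the conjugate base chirp; the FFT peak reveals the shift."""
    spec = np.fft.fft(signal * np.conj(chirp(0)))
    return int(np.argmax(np.abs(spec)))
```

Distinct shifts produce exactly orthogonal symbols, which is what lets a receiver pull apart concurrent transmissions in principle; OrthoRa's contribution is making this work for collided packets at low SNR with real-world synchronization and CFO.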
Title: "Push the Limit of LPWANs with Concurrent Transmissions"
Authors: Pengjin Xie, Yinghui Li, Zhenqiang Xu, Qiang Chen, Yunhao Liu, Jiliang Wang
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228983
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10229011
Haozhao Wang, Wenchao Xu, Yunfeng Fan, Rui Li, Pan Zhou
Federated Learning (FL) enables collaborative model training among a number of distributed devices coordinated by a centralized server, where each device alternates between local gradient computation and communication with the server. FL suffers significant performance degradation from excessive communication delay between the server and devices, especially when the devices' network bandwidth is limited, as is common in edge environments. Existing methods overlap gradient computation with communication to hide the communication latency and accelerate FL training. However, the overlapping also creates an inevitable gap between the local model on each device and the global model on the server, which seriously restricts the convergence rate of the learning process. To address this problem, we propose AOCC-FL, a new overlapping method for FL that aligns the local model with the global model via calibrated compensation, so that the communication delay can be hidden without deteriorating convergence. Theoretically, we prove that AOCC-FL admits the same convergence rate as the non-overlapping method. In both simulated and testbed experiments, we show that AOCC-FL achieves a convergence rate comparable to the non-overlapping method while outperforming state-of-the-art overlapping methods.
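The calibrated compensation is the paper's own contribution; as a generic illustration of the idea of correcting a stale (overlapped) gradient, here is a DC-ASGD-style first-order compensation sketch, with grad*grad used as a cheap diagonal curvature estimate. All names and the constant lam are illustrative assumptions, not AOCC-FL's calibration:

```python
def compensate(grad, w_current, w_stale, lam=0.5):
    """First-order delay compensation: approximate the gradient at w_current
    from one computed at the stale point w_stale, using grad*grad as a
    diagonal curvature surrogate (DC-ASGD style)."""
    return grad + lam * grad * grad * (w_current - w_stale)

# Quadratic f(w) = w^2 / 2, so the true gradient is grad(w) = w.
# A gradient computed at w_stale = 1.0 is stale once the model moved to 0.8;
# compensation pulls it toward the true gradient 0.8.
raw = 1.0
fixed = compensate(raw, w_current=0.8, w_stale=1.0)
```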
Title: "AOCC-FL: Federated Learning with Aligned Overlapping via Calibrated Compensation"
Pub Date: 2023-05-17 | DOI: 10.1109/INFOCOM53939.2023.10228966
He Sun, Mingjun Xiao, Yin Xu, Guoju Gao, S. Zhang
As a new paradigm of data trading, Crowdsensing Data Trading (CDT) has attracted widespread attention in recent years: buyers' data collection tasks are crowdsourced, through a platform acting as a broker, to a group of mobile users serving as sellers for long-term data trading. The stability of the matching between buyers and sellers in the data trading market is one of the most important CDT issues. In this paper, we focus on the privacy-preserving stable CDT problem with unknown buyer preference sequences. Our goal is to maximize the accumulated data quality of each task while protecting the data qualities of sellers and ensuring the stability of the CDT market. We model this problem as a differentially private competing multi-player multi-armed bandit problem. We define a novel metric, δ-stability, and propose DPS-CB, a privacy-preserving stable CDT mechanism based on differential privacy, stable matching theory, and a competing bandit strategy, to solve this problem. Finally, we prove the security and the stability of the CDT market under the effect of privacy concerns and analyze the regret performance of DPS-CB. The performance is also demonstrated on a real-world dataset.
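The differential-privacy ingredient can be illustrated with the standard Laplace mechanism for protecting a seller's reported data quality. This is the textbook mechanism for ε-DP with sensitivity 1, not necessarily the calibration DPS-CB uses:

```python
import math
import random

def dp_quality(q, epsilon, rng):
    """Report data quality q perturbed with Laplace(1/epsilon) noise via
    inverse-CDF sampling: X = q - (1/eps) * sgn(U) * ln(1 - 2|U|)."""
    u = rng.random() - 0.5
    return q - math.copysign(math.log(1.0 - 2.0 * abs(u)), u) / epsilon

rng = random.Random(42)
reports = [dp_quality(0.7, epsilon=1.0, rng=rng) for _ in range(20000)]
```

The noise is zero-mean, so a bandit learner can still estimate a seller's true quality from repeated noisy reports, which is what makes DP compatible with regret minimization.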
{"title":"Privacy-preserving Stable Crowdsensing Data Trading for Unknown Market","authors":"He Sun, Mingjun Xiao, Yin Xu, Guoju Gao, S. Zhang","doi":"10.1109/INFOCOM53939.2023.10228966","DOIUrl":"https://doi.org/10.1109/INFOCOM53939.2023.10228966","url":null,"abstract":"As a new paradigm of data trading, Crowdsensing Data Trading (CDT) has attracted widespread attention in recent years, where data collection tasks of buyers are crowdsourced to a group of mobile users as sellers through a platform as a broker for long-term data trading. The stability of the matching between buyers and sellers in the data trading market is one of the most important CDT issues. In this paper, we focus on the privacy-preserving stable CDT issue with unknown preference sequences of buyers. Our goal is to maximize the accumulative data quality for each task while protecting the data qualities of sellers and ensuring the stability of the CDT market. We model such privacy-preserving stable CDT issue with unknown preference sequences as a differentially private competing multi-player multi-armed bandit problem. We define a novel metric δ-stability and propose a privacy-preserving stable CDT mechanism based on differential privacy, stable matching theory, and competing bandit strategy, called DPS-CB, to solve this problem. Finally, we prove the security and the stability of the CDT market under the effect of privacy concerns and analyze the regret performance of DPS-CB. 
Also, the performance is demonstrated on a real-world dataset.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127769953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
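The abstract above does not specify DPS-CB's internals, but its core combination — bandit-style learning of seller qualities with differentially private quality feedback — can be illustrated with a minimal sketch. The function name `dp_ucb`, the reward model, and all parameters below are illustrative assumptions, not the authors' mechanism: a single buyer selects sellers (arms) by UCB while each observed data quality is perturbed with Laplace(1/ε) noise before the platform records it.

```python
import math
import random

def dp_ucb(qualities, rounds=5000, epsilon=1.0, seed=0):
    """Illustrative sketch (not the paper's DPS-CB): UCB arm selection
    over sellers, where each observed data quality is perturbed with
    Laplace(1/epsilon) noise so individual quality reports stay
    epsilon-differentially private from the platform's perspective."""
    rng = random.Random(seed)
    n = len(qualities)
    counts = [0] * n          # pulls per seller
    sums = [0.0] * n          # sum of privatized observations per seller
    picks = []
    for t in range(1, rounds + 1):
        if t <= n:
            arm = t - 1       # play each seller once to initialize
        else:
            arm = max(range(n), key=lambda i:
                      sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = qualities[arm] + rng.random() * 0.1   # noisy true quality
        # difference of two iid Exp(epsilon) draws is Laplace(0, 1/epsilon)
        noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
        counts[arm] += 1
        sums[arm] += reward + noise   # platform only sees the privatized value
        picks.append(arm)
    return picks

picks = dp_ucb([0.3, 0.5, 0.9])
# despite the privacy noise, the highest-quality seller is chosen most often
```

The noise inflates the variance each arm's mean estimate must average out, which is exactly why the paper analyzes regret under the effect of privacy concerns: stronger privacy (smaller ε) means slower identification of the best sellers.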
Cloud infrastructure has gradually become geographically distributed in order to provide anywhere, anytime connectivity to tenants all over the world. Tenant task placement in geo-distributed clouds involves three critical and coupled factors: regional diversity in electricity prices, access delay for tenants, and traffic demand among tasks. However, existing works disregard either the regional differences in electricity prices or the tenant requirements in geo-distributed clouds, resulting in increased operating costs or low user QoS. To bridge the gap, we design a cost optimization framework for tenant task placement in geo-distributed clouds, called TanGo. However, it is non-trivial to achieve such an optimization framework while meeting all the tenant requirements. To this end, we first formulate the electricity-cost minimization problem for task placement as a constrained mixed-integer non-linear programming problem. We then propose a near-optimal algorithm with a tight approximation ratio (1 − 1/e) using an effective submodular-based method. Results of in-depth simulations based on real-world datasets show the effectiveness of our algorithm as well as an overall 10%-30% reduction in electricity expenses compared to commonly-adopted alternatives.
{"title":"TanGo: A Cost Optimization Framework for Tenant Task Placement in Geo-distributed Clouds","authors":"Luyao Luo, Gongming Zhao, Hong-Ze Xu, Zhuolong Yu, Liguang Xie","doi":"10.1109/INFOCOM53939.2023.10229004","DOIUrl":"https://doi.org/10.1109/INFOCOM53939.2023.10229004","url":null,"abstract":"Cloud infrastructure has gradually displayed a tendency of geographical distribution in order to provide anywhere, anytime connectivity to tenants all over the world. The tenant task placement in geo-distributed clouds comes with three critical and coupled factors: regional diversity in electricity prices, access delay for tenants, and traffic demand among tasks. However, existing works disregard either the regional difference in electricity prices or the tenant requirements in geo-distributed clouds, resulting in increased operating costs or low user QoS. To bridge the gap, we design a cost optimization framework for tenant task placement in geo-distributed clouds, called TanGo. However, it is non-trivial to achieve an optimization framework while meeting all the tenant requirements. To this end, we first formulate the electricity cost minimization for task placement problem as a constrained mixed-integer non-linear programming problem. We then propose a near-optimal algorithm with a tight approximation ratio (1 − 1/e) using an effective submodular-based method. 
Results of in-depth simulations based on real-world datasets show the effectiveness of our algorithm as well as the overall 10%-30% reduction in electricity expenses compared to commonly-adopted alternatives.","PeriodicalId":387707,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131822489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
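The (1 − 1/e) ratio cited in the TanGo abstract is the classic guarantee of greedy maximization of a monotone submodular function under a cardinality constraint. The paper's actual objective and constraints are not given here; the sketch below uses a toy coverage utility (which data centers cover which tenant regions) as a stand-in, and all names (`greedy_submodular`, `coverage`, `dc1`…) are illustrative assumptions:

```python
def greedy_submodular(ground, f, k):
    """Greedy maximization of a monotone submodular set function f
    under the cardinality constraint |S| <= k; this attains the
    classic (1 - 1/e) approximation ratio."""
    S = set()
    for _ in range(k):
        best, gain = None, 0.0
        for e in ground - S:
            g = f(S | {e}) - f(S)       # marginal gain of adding e
            if g > gain:
                best, gain = e, g
        if best is None:                # no element has positive gain left
            break
        S.add(best)
    return S

# toy stand-in utility: which data centers "cover" which tenant regions
coverage = {
    "dc1": {"us", "eu"},
    "dc2": {"eu", "asia"},
    "dc3": {"asia"},
    "dc4": {"us"},
}
f = lambda S: len(set().union(*(coverage[d] for d in S)) if S else set())

chosen = greedy_submodular(set(coverage), f, 2)
# two greedily chosen sites suffice to cover all three regions
```

The guarantee rests on diminishing returns: each extra data center adds at most as much utility as it would have added to a smaller placement, which is what makes the simple greedy loop near-optimal.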