Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9322139
Sihua Wang, Mingzhe Chen, W. Saad, Changchuan Yin, Shuguang Cui, H. Poor
In this paper, the problem of minimizing the weighted sum of age of information (AoI) and total energy consumption of Internet of Things (IoT) devices is studied. In particular, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must sample the real-time status of the physical system and send the status information to a base station (BS) so as to monitor the physical process. The dynamics of the realistic physical process influence the sampling frequency and status update scheme of each device. In particular, when the physical process varies rapidly, the sampling frequency of each device must be increased to capture these dynamics. Meanwhile, changes in the sampling frequency also impact the energy usage of the device. Thus, it is necessary to determine a subset of devices to sample the physical process at each time slot so as to accurately monitor the dynamics of the physical process using minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of AoI and total device energy consumption. To solve this problem, a machine learning framework based on the repeated update Q-learning (RUQL) algorithm is proposed. The proposed method enables the BS to overcome the biased action selection problem (i.e., an agent always taking a subset of actions while ignoring others) and hence to dynamically and quickly find a device sampling and status update policy that minimizes the sum of AoI and energy consumption of all devices. Simulations with real data of PM 2.5 pollution in Beijing from the Center for Statistical Science at Peking University show that the proposed algorithm can reduce the sum of AoI by up to 26.9% compared to the conventional Q-learning method.
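The repeated-update idea behind RUQL can be sketched briefly: the standard Q-learning update for the chosen action a is applied roughly 1/π(a) times, where π(a) is the probability the behaviour policy selects a, so rarely selected actions are not under-updated. The following is a minimal sketch on a toy one-state problem, not the paper's device-scheduling implementation; the environment, rewards, and parameters are illustrative.

```python
import random

def ruql_update(Q, s, a, r, s_next, pi_a, alpha=0.1, gamma=0.9):
    """Repeated Update Q-Learning: apply the usual TD update
    round(1/pi_a) times, where pi_a is the probability that the
    behaviour policy chose action a in state s. Frequently chosen
    actions get one update; rare ones get several, which counters
    biased action selection."""
    for _ in range(max(1, round(1.0 / pi_a))):
        td_target = r + gamma * max(Q[s_next])
        Q[s][a] += alpha * (td_target - Q[s][a])

# Toy example: one state, two actions; action 1 has the higher reward
# but an epsilon-greedy policy may initially pick it rarely.
random.seed(0)
Q = {0: [0.0, 0.0]}
eps = 0.2
for _ in range(500):
    greedy = max(range(2), key=lambda a: Q[0][a])
    a = greedy if random.random() > eps else random.randrange(2)
    pi_a = (1 - eps + eps / 2) if a == greedy else eps / 2
    r = 1.0 if a == 1 else 0.2
    ruql_update(Q, 0, a, r, 0, pi_a)

print(Q[0])  # action 1 should dominate
```

Because the exploratory action is updated round(1/0.1) = 10 times per visit, its Q-value catches up quickly even under a biased behaviour policy.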
Title: "Reinforcement Learning for Minimizing Age of Information under Realistic Physical Dynamics." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9322496
Ahmad Terra, R. Inam, Sandhya Baskaran, Pedro Batista, Ian Burdick, E. Fersman
Artificial Intelligence (AI) is implemented in various applications in the telecommunications domain, ranging from managing the network, controlling a specific hardware function, preventing failures, and troubleshooting problems, to automating network slice management in 5G. Greater levels of autonomy increase the need for explainability of the decisions made by AI so that humans can understand them (e.g., the underlying data evidence and causal reasoning), consequently enabling trust. This paper first presents the application of multiple global and local explainability methods, with the main purpose of analyzing the root cause of Service Level Agreement (SLA) violation prediction in a 5G network slicing setup by identifying the important features contributing to the decision. Second, it performs a comparative analysis of the applied methods with respect to the explainability of the predicted violation. Further, the global explainability results are validated using the statistical Causal Dataframe method in order to refine the identified cause of the problem and thus validate the explanations.
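Global feature-importance methods of the kind applied here can be illustrated with permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is a generic stand-in, not the paper's specific explainability methods or its Causal Dataframe validation; the "SLA violation" data and the model are synthetic.

```python
import random

random.seed(1)

# Synthetic "SLA violation" data: the label depends on feature 0
# (say, load), not on feature 1 (noise).
X = [[random.random(), random.random()] for _ in range(400)]
y = [1 if x[0] > 0.6 else 0 for x in X]

def model(x):
    # A fixed classifier that has learned the true rule.
    return 1 if x[0] > 0.6 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Importance = accuracy drop after shuffling one feature column."""
    base = accuracy(X, y)
    col = [x[feature] for x in X]
    random.shuffle(col)
    Xp = [list(x) for x in X]
    for row, v in zip(Xp, col):
        row[feature] = v
    return base - accuracy(Xp, y)

imp_load = permutation_importance(X, y, 0)
imp_noise = permutation_importance(X, y, 1)
print(imp_load, imp_noise)  # the load feature shows a large drop, noise none
```

A large accuracy drop flags a feature as a candidate root cause; causal validation is then needed to distinguish correlation from cause.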
Title: "Explainability Methods for Identifying Root-Cause of SLA Violation Prediction in 5G Network." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-7.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9322202
Tobias Schaich, Karthik Subramaniam, E. Acedo, A. A. Rawi
This paper re-examines the 100 Ohm characteristic impedance assumption used in the context of Digital Subscriber Line in the very high to ultra high frequency range. A novel method of measuring the characteristic impedance of a transmission line is introduced which uses time-gated measurements from a vector network analyser. The results are utilised to synthesise impedance matching circuits for the excitation of a single twisted pair and a two-pair underground cable at centre frequencies of 150 MHz and 500 MHz. These circuits achieved reflections as low as -40 dB. Furthermore, compared to a commercial reference balun, we report transmission gains that increase with frequency: a few decibels below 100 MHz, several decibels between 100 and 250 MHz, and many more above 400 MHz. In fact, compatibility tests beyond the target cable samples demonstrated gains higher than 10 dB in 20, 40 and 60 metre underground cables at around 500 MHz, indicating that sample-based matching circuits are not entirely sample-specific. Preliminary tests showed no significant change in crosstalk when using differential matching circuits.
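For context, the reflection level follows from the mismatch between the line's actual characteristic impedance and the matching network: Gamma = (Z_L - Z_0)/(Z_L + Z_0) and return loss RL = -20*log10(|Gamma|), so -40 dB corresponds to |Gamma| = 0.01. A small generic calculation (not the paper's time-gated measurement method); the 120-ohm figure is purely illustrative:

```python
import math

def reflection_coefficient(z_load, z_source):
    """Voltage reflection coefficient at the interface of two impedances."""
    return (z_load - z_source) / (z_load + z_source)

def return_loss_db(z_load, z_source):
    """Return loss in dB; larger means a better match."""
    gamma = abs(reflection_coefficient(z_load, z_source))
    return math.inf if gamma == 0 else -20 * math.log10(gamma)

# Mismatch of a hypothetical 120-ohm line driven under the 100-ohm assumption:
print(round(return_loss_db(120, 100), 1))   # ~20.8 dB, a modest match
# Reaching 40 dB return loss requires |gamma| of about 0.01:
print(round(return_loss_db(102.02, 100), 1))  # ~40.0 dB
```

This makes the reported -40 dB figure concrete: the synthesised circuits hold the residual mismatch to about one percent in reflection amplitude.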
Title: "High Frequency Impedance Matching for Twisted Pair Cables." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9348244
Tian Zixu, Kushan Sudheera Kalupahana Liyanage, G. Mohan
With the advances in modern communication technologies, the application scale of the Internet of Things (IoT) has grown to an unprecedented level, which in turn poses threats to the IoT ecosystem. As intrusions and malicious actions become more complex and unpredictable, developing an effective anomaly detection system that accounts for the distributed nature of IoT networks remains a challenge. Moreover, the lack of sufficiently large sets of IoT traffic samples, together with data privacy concerns, poses further challenges in developing a behavior-based anomaly detection system. To address these issues, we present an unsupervised hierarchical approach for anomaly detection through cooperation between a generative adversarial network (GAN) and an auto-encoder (AE). The problems of data aggregation and privacy preservation are addressed by reconstructing a sampling pool at a centralized controller using a collection of generators from the individual IoT networks. Then, a centralized global AE is trained and passed to the individual local networks for anomaly detection after a final adaptation with local raw data from the IoT nodes. The performance is evaluated using the UNSW Bot-IoT dataset, and the results demonstrate the effectiveness of the proposed approach, which outperforms other approaches.
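The AE side of such a pipeline flags anomalies by reconstruction error: normal traffic is reconstructed well, anomalous traffic is not. As a minimal stand-in for a trained auto-encoder (not the paper's GAN+AE architecture), the sketch below uses a linear one-dimensional bottleneck, i.e. projection onto a fitted line, with synthetic "traffic" features:

```python
import math
import random

random.seed(7)

# "Normal" traffic features lie near the line y = 2x; anomalies do not.
normal = [(t, 2 * t + random.gauss(0, 0.05))
          for t in [random.random() for _ in range(200)]]

# A linear auto-encoder with a 1-D bottleneck amounts to projecting onto
# one direction; here the slope is fit by least squares as a stand-in
# for training.
slope = sum(x * y for x, y in normal) / sum(x * x for x, _ in normal)

def reconstruction_error(p, s):
    """Distance from point p to its projection onto the line y = s*x."""
    x, y = p
    d2 = 1 + s * s
    t = (x + s * y) / d2          # projection coordinate along (1, s)
    return math.hypot(x - t, y - s * t)

# Threshold: slightly above the worst reconstruction error seen on
# normal data.
threshold = max(reconstruction_error(p, slope) for p in normal) * 1.1

anomaly = (0.5, -1.0)  # far from the normal subspace
print(reconstruction_error(anomaly, slope) > threshold)  # True
```

In the distributed setting of the paper, the generators let the controller build such a detector without ever collecting the nodes' raw data.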
Title: "Generative Adversarial Network and Auto Encoder based Anomaly Detection in Distributed IoT Networks." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-7.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9348134
Deemah H. Tashman, W. Hamouda
In this paper, physical-layer security (PLS) for an underlay cognitive radio network (CRN) over cascaded Rayleigh fading channels is studied. The cognitive radio system consists of a secondary source transmitting to a destination over a cascaded Rayleigh fading channel, while an eavesdropper attempts to intercept the confidential information of the secondary user (SU) pair. Secrecy is studied in terms of three main security metrics: the secrecy outage probability (SOP), the probability of non-zero secrecy capacity (Prnzc), and the intercept probability (Pint). The effects of path loss and of varying the distance from the SU transmitter on secrecy are also analyzed. The results reveal the strong effect of the cascade level on system secrecy. In addition, we study how varying the interference threshold that the primary user (PU) receiver can tolerate affects the secrecy of the SU pair. The effect of the channel model parameters of both the main and wiretap channels is investigated using both simulation and analytical results.
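Metrics of this kind can be estimated by Monte Carlo simulation. Below is a double-Rayleigh (cascade level 2) sketch with the secrecy capacity Cs = [log2(1 + SNR_m * g_m) - log2(1 + SNR_e * g_e)]^+; the SNR values and target rate are illustrative, not the paper's system parameters, and the underlay interference constraint is omitted.

```python
import math
import random

random.seed(3)

def cascaded_rayleigh_gain():
    """Power gain of a double-Rayleigh (cascaded) channel: the squared
    product of two independent unit-mean-power Rayleigh amplitudes."""
    a1 = math.hypot(random.gauss(0, math.sqrt(0.5)),
                    random.gauss(0, math.sqrt(0.5)))
    a2 = math.hypot(random.gauss(0, math.sqrt(0.5)),
                    random.gauss(0, math.sqrt(0.5)))
    return (a1 * a2) ** 2

def secrecy_metrics(snr_main, snr_eve, rate_s, n=50_000):
    """Monte Carlo estimates of the secrecy outage probability
    P(Cs < rate_s) and the probability of non-zero secrecy capacity."""
    sop = nzc = 0
    for _ in range(n):
        cs = math.log2(1 + snr_main * cascaded_rayleigh_gain()) \
           - math.log2(1 + snr_eve * cascaded_rayleigh_gain())
        sop += cs < rate_s
        nzc += cs > 0
    return sop / n, nzc / n

sop, prnzc = secrecy_metrics(snr_main=100.0, snr_eve=10.0, rate_s=1.0)
print(sop, prnzc)
```

Raising the cascade level (multiplying in more Rayleigh amplitudes) deepens the fades and, as the paper reports, degrades the secrecy metrics.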
Title: "Physical-Layer Security for Cognitive Radio Networks over Cascaded Rayleigh Fading Channels." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9322124
Fan Zhou, Xin Jing, Xovee Xu, Ting Zhong, Goce Trajcevski, Jin Wu
Modeling the information diffusion process is an essential step towards understanding the mechanisms driving the success of information. Existing methods either exploit various features associated with cascades to study the underlying factors governing information propagation, or leverage graph representation techniques to model the diffusion process in an end-to-end manner. Current solutions are only valid for a static, fixed observation scenario and fail to handle growing observations, due to the catastrophic forgetting problem inherent in the machine learning approaches used for modeling and predicting cascades. To remedy this issue, we propose CICP (Continual Information Cascades Prediction), a novel dynamic information diffusion model. CICP employs graph neural networks for modeling information diffusion and continually adapts to growing observations. It captures the correlations between successive observations while preserving the parameters important to cascade evolution and transition. Experiments conducted on real-world cascade datasets demonstrate that our method not only improves prediction performance as data accumulates but also prevents the model from forgetting previously trained tasks.
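Preserving parameters that mattered for earlier observation windows is commonly done with a quadratic penalty in the style of elastic weight consolidation. The abstract does not spell out CICP's mechanism, so the following is a generic sketch of that idea: important weights are anchored to their post-previous-task values, while unimportant ones may move freely.

```python
def continual_loss(task_loss, params, old_params, importance, lam=1.0):
    """New-task loss plus a quadratic penalty anchoring each parameter
    to its value after the previous task, weighted by how important
    that parameter was (e.g. estimated Fisher information)."""
    penalty = sum(f * (p - p0) ** 2
                  for p, p0, f in zip(params, old_params, importance))
    return task_loss + lam / 2 * penalty

# A parameter that was important (f=10) is penalised for drifting by 0.2;
# an unimportant one (f=0) can move freely.
loss = continual_loss(0.5, params=[1.2, 3.0], old_params=[1.0, 0.0],
                      importance=[10.0, 0.0])
print(loss)  # 0.5 + 0.5 * 10 * 0.2**2 = 0.7
```

Minimising this combined loss trades off fitting the new observation window against forgetting what earlier windows taught the model.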
Title: "Continual Information Cascade Learning." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9347993
Dan-dan Yang, Jiang Liu, Ran Zhang, Tao Huang
Satellite network constellations are promising for providing efficient global Internet access. However, the constellation scale, user population, and service variety in satellite networks are very large, requiring efficient resource allocation and network management. Network virtualization is an efficient way to achieve these objectives, but conventional schemes designed for terrestrial networks are not well adapted to satellite networks. Therefore, in this work, we establish a network virtualization model that accounts for topology dynamics, quality-of-service requirements, and resource constraints. We then formulate Virtual Network Embedding (VNE) as an optimization problem and propose a multi-constraint virtual network embedding algorithm to solve it. Finally, we evaluate the proposed scheme and demonstrate its adaptability to satellite networks.
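For readers unfamiliar with VNE, the node-mapping stage can be sketched with a simple greedy baseline: place each virtual node on the substrate node with the most remaining capacity. This is a generic illustration, not the paper's multi-constraint algorithm; it ignores link mapping, topology dynamics, and co-location restrictions.

```python
def greedy_node_mapping(substrate_cpu, virtual_cpu):
    """Map each virtual node (largest CPU demand first) onto the
    substrate node with the most remaining CPU. Returns None if some
    demand cannot be placed (the embedding request is rejected)."""
    remaining = dict(substrate_cpu)
    mapping = {}
    for v, demand in sorted(virtual_cpu.items(),
                            key=lambda kv: kv[1], reverse=True):
        host = max(remaining, key=remaining.get)
        if remaining[host] < demand:
            return None  # reject: not enough capacity anywhere
        mapping[v] = host
        remaining[host] -= demand
    return mapping

# Two substrate nodes, two virtual nodes with CPU demands 7 and 5.
print(greedy_node_mapping({"s1": 10, "s2": 6}, {"v1": 7, "v2": 5}))
```

A multi-constraint algorithm such as the paper's additionally checks bandwidth, delay, and time-varying visibility before accepting a placement.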
Title: "Multi-Constraint Virtual Network Embedding Algorithm For Satellite Networks." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9348052
Lu Yu, Y. Leung, X. Chu, J. Ng
Fingerprinting is one of the representative methods for wireless indoor localization. It uses a fingerprint database (measured in the offline phase) and the current received signal strengths (RSSs) (measured by the user's device in the online phase) to determine the location of the device. However, the RSSs, and hence the localization accuracy, are affected by time-varying environmental factors (e.g., the number of people in a shopping mall). In this paper, we propose a new method for wireless localization in time-varying indoor environments. In the offline phase, the proposed method measures extra information: it builds $E$ fingerprint databases for $E$ respective environmental conditions, where $E$ is a design parameter (e.g., $E=2$ for the peak and non-peak periods in a shopping mall). In the online phase, it leverages this extra information for better localization in time-varying indoor environments, even when the current environmental condition differs from those considered in the offline phase. The proposed method is particularly suitable for indoor venues whose primary concern is to provide good-quality localization services and which can afford a moderate amount of extra resources for one-off measurements in the offline phase (e.g., exhibition centers, airports, shopping malls). We conduct a simulation experiment and a real-world experiment to demonstrate that the proposed method gives accurate localization.
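The multi-database idea can be illustrated with the simplest online matcher: nearest-neighbour search over all $E$ databases, returning the location whose stored fingerprint, under any condition, is closest to the observed RSS vector. This is a baseline sketch with made-up RSS values, not the paper's online algorithm.

```python
def locate(rss, databases):
    """Nearest-neighbour matching against E fingerprint databases.
    Each database maps location -> reference RSS vector (dBm) for one
    environmental condition; the location whose fingerprint in any
    database is closest (squared Euclidean distance) wins."""
    best_loc, best_d2 = None, float("inf")
    for db in databases:
        for loc, ref in db.items():
            d2 = sum((a - b) ** 2 for a, b in zip(rss, ref))
            if d2 < best_d2:
                best_loc, best_d2 = loc, d2
    return best_loc

# E = 2 conditions (peak / non-peak), two locations, three access points.
peak     = {"lobby": [-60, -72, -80], "shop": [-75, -58, -66]}
non_peak = {"lobby": [-55, -68, -77], "shop": [-71, -54, -62]}
print(locate([-57, -70, -78], [peak, non_peak]))  # "lobby"
```

Searching all databases rather than first guessing the current condition is what lets the method degrade gracefully when the true condition lies between the measured ones.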
Title: "Multi-Fingerprint for Wireless Localization in Time-Varying Indoor Environment." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9322226
Kai Wang, Xiao Zhang, Lingjie Duan
Unmanned Aerial Vehicle (UAV) technology is a promising solution for providing high-quality mobile services (e.g., edge computing, fast Internet connection, and local caching) to ground users, where a UAV with limited service coverage travels among multiple geographical user locations (e.g., hotspots) to service demands locally. Different UAVs must cooperate with each other to service many users, and how to determine their cooperative path planning to best meet many users' spatio-temporally distributed demands is an important question. This paper is the first to design and analyze cooperative path-planning algorithms for a UAV swarm optimally servicing many spatial locations with dynamic user arrivals and waiting deadlines over the time horizon. Each UAV needs to decide whether to wait at its current location or chase a newly released demand at another location, under coordination with the other UAVs in the swarm. Even without coordination with the remaining UAVs, each UAV's routing problem follows a dynamic-programming structure and is difficult to solve directly given many user demands. We manage to simplify it and propose an optimal algorithm with fast computation time (polynomial in both the number of user locations and the number of user demands) that returns the UAV's optimal path planning. When a large number $|K|$ of UAVs coordinate, the dynamic-programming simplification becomes intractable. Alternatively, we present an iterative cooperation algorithm with worst-case approximation ratio $1-\left(1-\frac{1}{|K|}\right)^{|K|}$, which provably outperforms the traditional idea of partitioning the UAVs to serve different user/location clusters separately. Finally, we conduct simulation experiments to show that our algorithm's average performance is close to the optimum.
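The approximation ratio is the familiar greedy-style bound: it decreases with the swarm size $|K|$ but never falls below $1-1/e$. A short derivation:

```latex
% For k = |K| \ge 1, apply 1 - x \le e^{-x} with x = 1/k:
\left(1 - \tfrac{1}{k}\right)^{k} \le e^{-1}
\quad\Longrightarrow\quad
1 - \left(1 - \tfrac{1}{k}\right)^{k} \ge 1 - \frac{1}{e} \approx 0.632 .
```

So even for an arbitrarily large swarm, the iterative cooperation algorithm guarantees at least about 63.2% of the optimal value in the worst case.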
Title: "Cooperative path planning of a UAV swarm to meet temporal-spatial user demands." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.
Pub Date: 2020-12-01. DOI: 10.1109/GLOBECOM42002.2020.9348082
Shiyu Zhai, Guobing Li, Zefeng Qi, Guomei Zhang
In this paper, the massive access problem in IoT networks is studied from the perspective of graph signal processing (GSP). First, we reveal the connection between massive access in IoT networks and the sampling of a graph signal, and model the massive access problem as a graph-based random sampling problem. Second, inspired by the restricted isometry property (RIP) condition in compressed sensing, we derive an RIP condition for random sampling of band-limited graph signals, showing for the first time that band-limited graph signals can be recovered from randomly selected noisy samples with a given probability. Based on the proposed RIP condition, the sampling probability of each sensing device is optimized by minimizing the Chebyshev or Gaussian approximations of the mean square error between the original and recovered signals. Experiments on the Bunny and Community graphs verify the stability of random sampling and show the performance gain of the proposed random sampling solutions.
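The sampling-and-recovery setting can be sketched on a toy graph. On a path graph the Laplacian eigenvectors have a closed DCT-like form, so a band-limited signal can be built directly, sampled at random nodes with noise, and recovered by least squares. This illustrates the random-sampling model only; it does not implement the paper's RIP analysis or optimized per-device sampling probabilities.

```python
import math
import random

random.seed(5)
N = 16   # path-graph nodes (sensing devices)
K = 2    # bandwidth: signal lives in the span of the first K eigenvectors

def basis(k, i):
    """k-th Laplacian eigenvector of a path graph, evaluated at node i
    (DCT-II form); k = 0 is the constant vector."""
    return math.cos(math.pi * k * (2 * i + 1) / (2 * N))

# A band-limited graph signal: combination of the first K basis vectors.
coeffs = [1.0, 0.5]
signal = [sum(c * basis(k, i) for k, c in enumerate(coeffs))
          for i in range(N)]

# Randomly sample m nodes (devices that transmit) with additive noise.
m = 8
nodes = random.sample(range(N), m)
samples = [signal[i] + random.gauss(0, 0.01) for i in nodes]

# Least-squares recovery of the K coefficients via the normal equations.
A = [[basis(k, i) for k in range(K)] for i in nodes]
ata = [[sum(A[r][p] * A[r][q] for r in range(m)) for q in range(K)]
       for p in range(K)]
atb = [sum(A[r][p] * samples[r] for r in range(m)) for p in range(K)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
c0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
c1 = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
print(c0, c1)  # close to the true coefficients 1.0 and 0.5
```

With more samples than the bandwidth (m > K) and a well-conditioned sampled basis, the band-limited signal is recovered stably from noisy random samples, which is exactly the behaviour the RIP condition quantifies.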
Title: "Graph-Based Random Sampling for Massive Access in IoT Networks." In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6.