Rumor Detection of COVID-19 Pandemic on Online Social Networks
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00055
Anqi Shi, Zheng Qu, Qingyao Jia, Chen Lyu
The novel coronavirus epidemic (COVID-19) has received widespread attention, causing a health crisis across the world. Massive amounts of information about COVID-19 have emerged on social networks. However, not all information disseminated on social networks is true and reliable, and in response to the COVID-19 pandemic, only genuine information is valuable to the authorities and the public. Detecting COVID-19 rumors on social networks is therefore an essential task. In this paper, we attempt to solve this problem with a machine learning approach on the Weibo platform. First, we extract text characteristics, user-related features, interaction-based features, and emotion-based features from the messages spread about COVID-19. Second, by combining these four types of features, we design an intelligent rumor detection model based on ensemble learning. Finally, we conduct extensive experiments on data collected from Weibo. Experimental results indicate that our model significantly improves the accuracy of rumor detection, achieving an accuracy of 91% and an AUC of 0.96.
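To make the pipeline concrete, here is a minimal sketch of a feature-combining, soft-voting ensemble in scikit-learn. The paper does not publish its code, so the choice of base classifiers and the random placeholder feature matrix are assumptions for illustration only.

```python
# Hypothetical sketch of a feature-combining, soft-voting ensemble rumor
# classifier; the paper's actual base learners and features are not public.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# X: one row per Weibo post; columns concatenate the four feature groups
# (text, user-related, interaction-based, emotion-based). y: 1 = rumor.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # placeholder feature matrix
y = rng.integers(0, 2, size=1000)  # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average the base learners' predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```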
{"title":"Rumor Detection of COVID-19 Pandemic on Online Social Networks","authors":"Anqi Shi, Zheng Qu, Qingyao Jia, Chen Lyu","doi":"10.1109/SEC50012.2020.00055","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00055","url":null,"abstract":"The new coronavirus epidemic (COVID-19) has received widespread attention, causing the health crisis across the world. Massive information about the COVID-19 has emerged on social networks. However, not all information disseminated on social networks is true and reliable. In response to the COVID-19 pandemic, only real information is valuable to the authorities and the public. Therefore, it is an essential task to detect rumors of the COVID-19 on social networks. In this paper, we attempt to solve this problem by using an approach of machine learning on the platform of Weibo. First, we extract text characteristics, user-related features, interaction-based features, and emotion-based features from the spread messages of the COVID-19. Second, by combining these four types of features, we design an intelligent rumor detection model with the technique of ensemble learning. Finally, we conduct extensive experiments on the collected data from Weibo. Experimental results indicate that our model can significantly improve the accuracy of rumor detection, with an accuracy rate of 91% and an AUC value of 0.96.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125012820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NanoLambda: Implementing Functions as a Service at All Resource Scales for the Internet of Things
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00035
Gareth George, F. Bakir, R. Wolski, C. Krintz
Internet of Things (IoT) devices are becoming increasingly prevalent in our environment, yet the process of programming these devices and processing the data they produce remains difficult. Typically, data is processed on device, involving arduous work in low level languages, or data is moved to the cloud, where abundant resources are available for Functions as a Service (FaaS) or other handlers. FaaS is an emerging category of flexible computing services, where developers deploy self-contained functions to be run in portable and secure containerized environments; however, at the moment, these functions are limited to running in the cloud or in some cases at the “edge” of the network using resource rich, Linux-based systems.In this paper, we present NanoLambda, a portable platform that brings FaaS, high-level language programming, and familiar cloud service APIs to non-Linux and microcontroller-based IoT devices. To enable this, NanoLambda couples a new, minimal Python runtime system that we have designed for the least capable end of the IoT device spectrum, with API compatibility for AWS Lambda and S3. NanoLambda transfers functions between IoT devices (sensors, edge, cloud), providing power and latency savings while retaining the programmer productivity benefits of high-level languages and FaaS. A key feature of NanoLambda is a scheduler that intelligently places function executions across multi-scale IoT deployments according to resource availability and power constraints. We evaluate a range of applications that use NanoLambda to run on devices as small as the ESP8266 with 64KB of ram and 512KB flash storage.
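For flavor, below is a hedged sketch of the kind of AWS-Lambda-style Python handler such a runtime executes on-device; the event fields and threshold logic are invented for illustration, not taken from the paper.

```python
# Hypothetical AWS-Lambda-style handler of the kind a NanoLambda-like
# runtime could run on a microcontroller; the event fields are invented.
def handler(event, context):
    # e.g. a temperature sample pushed by an attached sensor
    reading = float(event.get("temperature_c", 0.0))
    # keep the on-device logic tiny: threshold locally, report the rest
    if reading > 40.0:
        return {"statusCode": 200, "body": "alert"}
    return {"statusCode": 200, "body": "ok"}

# local smoke test of the handler, outside any FaaS runtime
print(handler({"temperature_c": 42.5}, None))
```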
{"title":"NanoLambda: Implementing Functions as a Service at All Resource Scales for the Internet of Things.","authors":"Gareth George, F. Bakir, R. Wolski, C. Krintz","doi":"10.1109/SEC50012.2020.00035","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00035","url":null,"abstract":"Internet of Things (IoT) devices are becoming increasingly prevalent in our environment, yet the process of programming these devices and processing the data they produce remains difficult. Typically, data is processed on device, involving arduous work in low level languages, or data is moved to the cloud, where abundant resources are available for Functions as a Service (FaaS) or other handlers. FaaS is an emerging category of flexible computing services, where developers deploy self-contained functions to be run in portable and secure containerized environments; however, at the moment, these functions are limited to running in the cloud or in some cases at the “edge” of the network using resource rich, Linux-based systems.In this paper, we present NanoLambda, a portable platform that brings FaaS, high-level language programming, and familiar cloud service APIs to non-Linux and microcontroller-based IoT devices. To enable this, NanoLambda couples a new, minimal Python runtime system that we have designed for the least capable end of the IoT device spectrum, with API compatibility for AWS Lambda and S3. NanoLambda transfers functions between IoT devices (sensors, edge, cloud), providing power and latency savings while retaining the programmer productivity benefits of high-level languages and FaaS. A key feature of NanoLambda is a scheduler that intelligently places function executions across multi-scale IoT deployments according to resource availability and power constraints. We evaluate a range of applications that use NanoLambda to run on devices as small as the ESP8266 with 64KB of ram and 512KB flash storage.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130080464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GLAMAR: Geo-Location Assisted Mobile Augmented Reality for Industrial Automation
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00036
M. Uddin, S. Mukherjee, M. Kodialam, T. Lakshman
Mobile Augmented Reality (MAR) is going to play an important role in industrial automation. In order to tag a physical object in the MAR world, a smartphone running MAR-based applications must know the precise location of the object in the real world. Tracking and localizing a large number of objects in an industrial environment can become a huge burden for the smartphone due to compute and battery requirements. In this paper, we propose GLAMAR, a novel framework that leverages externally provided geo-locations of objects and IMU sensor information from the objects (both of which can be noisy) to locate them precisely in the MAR world. GLAMAR offloads heavy-duty computation to the edge and supports building MAR-based applications using commercial development packages. We develop a regenerative particle filter and a continuously improving transformation-matrix computation methodology to dramatically improve the positional accuracy of objects in the real and AR worlds. Our prototype implementation on the Android platform using ARCore shows the practicality of GLAMAR in developing MAR-based applications with high precision, efficiency, and a more realistic experience. GLAMAR achieves less than 10 cm of error relative to the ground truth for both stationary and moving objects, and it reduces CPU overhead by 83% and battery consumption by 80% for mobile devices.
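As intuition for the filtering step, the sketch below implements a generic bootstrap particle filter that fuses IMU-derived displacement with noisy geo-location fixes. GLAMAR's regenerative variant and its parameters are not public, so the noise levels and resampling rule here are assumptions.

```python
# Generic bootstrap particle filter fusing IMU displacement with noisy
# geo-location fixes; GLAMAR's regenerative variant is not public, so
# the noise levels and resampling rule below are assumptions.
import numpy as np

N = 500
particles = np.zeros((N, 2))      # candidate 2-D object positions
weights = np.full(N, 1.0 / N)
MOTION_NOISE, MEAS_NOISE = 0.05, 0.5

def pf_step(displacement, z):
    """displacement: IMU-derived motion since the last fix; z: noisy geo-fix."""
    global particles, weights
    # predict: propagate every particle with the displacement plus jitter
    particles += displacement + np.random.normal(0, MOTION_NOISE, particles.shape)
    # update: re-weight by the Gaussian likelihood of the measurement
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights = np.exp(-d2 / (2 * MEAS_NOISE ** 2))
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = np.random.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return weights @ particles    # posterior-mean position estimate

print(pf_step(np.array([0.1, 0.0]), np.array([0.12, 0.01])))
```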
{"title":"GLAMAR: Geo-Location Assisted Mobile Augmented Reality for Industrial Automation","authors":"M. Uddin, S. Mukherjee, M. Kodialam, T. Lakshman","doi":"10.1109/SEC50012.2020.00036","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00036","url":null,"abstract":"Mobile Augmented Reality (MAR) is going to play an important role in industrial automation. In order to tag a physical object in the MAR world, a smart phone running MAR-based applications must know the precise location of an object in the real world. Tracking and localizing a large number of objects in an industrial environment can become a huge burden for the smart phone due to compute and battery requirements. In this paper we propose GLAMAR, a novel framework that leverages externally provided geo-location of the objects and IMU sensor information (both of which can be noisy) from the objects to 10-cate them precisely in the MAR world. GLAMAR offloads heavy-duty computation to the edge and supports building MAR-based applications using commercial development packages. We develop a regenerative particle filter and a continuously improving transformation matrix computation methodology to dramatically improve the positional accuracy of objects in the real and the AR world. Our prototype implementation on Android platform using ARCore shows the practicality of GLAMAR in developing MAR-based applications with high precision, efficiency, and more realistic experience. GLAMAR is able to achieve less then 10cm error compared to the ground truth for both stationary and moving objects and reduces the CPU overhead by 83% and battery consumption by 80% for mobile devices.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121104074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated Learning with Heterogeneous Quantization
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00060
Cong Shen, Shengbo Chen
Quantization of local model updates before uploading them to the parameter server is a primary solution for reducing the communication overhead of federated learning. However, prior literature typically assumes homogeneous quantization for all clients, while in reality devices are heterogeneous and support different levels of quantization precision. This heterogeneity of quantization poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an unsolved problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ allocates different weights to clients by minimizing a convergence-rate upper bound, which is a function of the quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate convergence, the instantaneous quantization error is computed and piggybacked when each client uploads its local model update, and the server dynamically calculates the weights accordingly for the current round (FEDHQ+). Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights, as well as over a heuristic scheme that assigns weights linearly proportional to the clients’ quantization precision.
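A hedged sketch of the aggregation idea follows: weights inversely proportional to each client's piggybacked quantization error, so fine-quantized updates count more. The paper derives the exact optimal weights from its convergence bound; this inverse-error rule is a stand-in to show the mechanics.

```python
# Hedged sketch of error-aware aggregation in the spirit of FEDHQ+:
# weights inversely proportional to each client's piggybacked quantization
# error. The paper derives exact optimal weights from its convergence
# bound, which this stand-in rule does not reproduce.
import numpy as np

def aggregate(updates, quant_errors, eps=1e-12):
    """updates: quantized model-update vectors, one per client;
    quant_errors: instantaneous quantization error piggybacked on upload."""
    w = 1.0 / (np.asarray(quant_errors) + eps)
    w /= w.sum()  # normalize so the weights sum to one
    return sum(wi * ui for wi, ui in zip(w, updates))

# A fine-quantized client (small error) dominates the aggregate.
updates = [np.array([1.0, 2.0]), np.array([1.2, 1.8])]
print(aggregate(updates, quant_errors=[0.01, 0.1]))
```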
{"title":"Federated Learning with Heterogeneous Quantization","authors":"Cong Shen, Shengbo Chen","doi":"10.1109/SEC50012.2020.00060","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00060","url":null,"abstract":"Quantization of local model updates before uploading to the parameter server is a primary solution to reduce the communication overhead in federated learning. However, prior literature always assumes homogeneous quantization for all clients, while in reality devices are heterogeneous and they support different levels of quantization precision. This heterogeneity of quantization poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an unsolved problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ allocates different weights to clients by minimizing the convergence rate upper bound, which is a function of quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate the convergence, the instantaneous quantization error is computed and piggybacked when each client uploads the local model update, and the server dynamically calculates the weight accordingly for the current round. Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights and a heuristic scheme which assigns weights linearly proportional to the clients’ quantization precision.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121781804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chatbot Security and Privacy in the Age of Personal Assistants
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00057
Winson Ye, Qun Li
The rise of personal assistants serves as a testament to the growing popularity of chatbots. However, as the field advances, it is important for the conversational AI community to keep in mind potential vulnerabilities in existing architectures and how attackers could take advantage of them. Towards this end, we present a survey of existing dialogue-system vulnerabilities in security and privacy. We define chatbot security and give some background on the state of the art in the field. The analysis features a comprehensive description of potential attacks on each module in a typical chatbot architecture: the client module, communication module, response generation module, and database module.
{"title":"Chatbot Security and Privacy in the Age of Personal Assistants","authors":"Winson Ye, Qun Li","doi":"10.1109/SEC50012.2020.00057","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00057","url":null,"abstract":"The rise of personal assistants serves as a testament to the growing popularity of chatbots. However, as the field advances, it is important for the conversational AI community to keep in mind any potential vulnerabilities in existing architectures and how attackers could take advantantage of them. Towards this end, we present a survey of existing dialogue system vulnerabilities in security and privacy. We define chatbot security and give some background regarding the state of the art in the field. This analysis features a comprehensive description of potential attacks of each module in a typical chatbot architecture: the client module, communication module, response generation module, and database module.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123101967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure and Energy-Efficient Offloading and Resource Allocation in a NOMA-Based MEC Network
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00063
Qun Wang, Han Hu, Haijian Sun, R. Hu
Energy efficiency and security are two critical issues for mobile edge computing (MEC) networks. With stochastic task arrivals, a time-varying dynamic environment, and passive attackers present in the network, it is very challenging to offload computation tasks securely and efficiently. In this paper, we study the task offloading and resource allocation problem in a non-orthogonal multiple access (NOMA)-assisted MEC network under security and energy-efficiency considerations. To tackle the problem, we propose a dynamic secure task offloading and resource allocation algorithm based on Lyapunov optimization theory. A stochastic non-convex problem is formulated to jointly optimize the local CPU frequency and transmit power, aiming to maximize the network energy efficiency, defined as the ratio of the long-term average secure rate to the long-term average power consumption of all users. The formulated problem is decomposed into deterministic sub-problems in each time slot, and the optimal local CPU frequency and transmit power of each user are given in closed form. Simulation results evaluate the impact of different parameters on the efficiency metrics and demonstrate that the proposed method achieves better energy efficiency than other benchmark methods.
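To illustrate the per-slot structure that Lyapunov optimization induces, here is a drift-plus-penalty-style decision sketch over a discretized grid of CPU frequencies and transmit powers. The rate, power, and queue models and all constants are generic stand-ins, not the paper's formulation, and the closed-form solutions it derives are not reproduced.

```python
# Drift-plus-penalty style per-slot decision over a discretized grid of
# CPU frequencies and transmit powers; all models and constants here are
# generic stand-ins, not the paper's formulation or closed-form solution.
import numpy as np

V = 10.0                                  # energy-vs-queue trade-off knob
F_GRID = np.linspace(0.1e9, 1.0e9, 10)    # candidate CPU frequencies (Hz)
P_GRID = np.linspace(0.01, 0.5, 10)       # candidate transmit powers (W)

def per_slot_decision(Q, arrivals, kappa=1e-27, bw=1e6, n0=1e-9, gain=1e-6,
                      cycles_per_bit=1000.0):
    """Q: backlog queue length (bits); arrivals: bits arriving this slot."""
    best, best_val = None, np.inf
    for f in F_GRID:
        for p in P_GRID:
            local_bits = f / cycles_per_bit                 # served locally
            offload_bits = bw * np.log2(1 + p * gain / n0)  # served via radio
            power = kappa * f ** 3 + p                      # CPU + radio power
            served = local_bits + offload_bits
            # minimize queue drift plus V-weighted energy penalty
            val = Q * (arrivals - served) + V * power
            if val < best_val:
                best_val, best = val, (f, p)
    return best

print(per_slot_decision(Q=1e6, arrivals=5e5))
```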
{"title":"Secure and Energy-Efficient Offloading and Resource Allocation in a NOMA-Based MEC Network","authors":"Qun Wang, Han Hu, Haijian Sun, R. Hu","doi":"10.1109/SEC50012.2020.00063","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00063","url":null,"abstract":"Energy efficiency and security are two critical issues for mobile edge computing (MEC) networks. With stochastic task arrivals, time-varying dynamic environment, and passive existing attackers, it is very challenging to offload computation tasks securely and efficiently. In this paper, we study the task offloading and resource allocation problem in a non-orthogonal multiple access (NOMA) assisted MEC network with security and energy efficiency considerations. To tackle the problem, a dynamic secure task offloading and resource allocation algorithm is proposed based on Lyapunov optimization theory. A stochastic non-convex problem is formulated to jointly optimize the local-CPU frequency and transmit power, aiming at maximizing the network energy efficiency, which is defined as the ratio of the long-term average secure rate to the long-term average power consumption of all users. The formulated problem is decomposed into the deterministic sub-problems in each time slot. The optimal local CPU-cycle and the transmit power of each user can be given in the closed-from. Simulation results evaluate the impacts of different parameters on the efficiency metrics and demonstrate that the proposed method can achieve better performance compared with other benchmark methods in terms of energy efficiency.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"751 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122975875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Configuration Management for Internet Services at the Edge: A Data-Driven Approach
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00020
Yue Zhang, Christopher Stewart
Internet services are increasingly pushed from the remote cloud to edge sites close to data sources to offer fast response times and a low energy footprint. However, software deployed at edge sites must be updated frequently, and performing updates as soon as they are available consumes a large amount of energy. Configuration management tools that install software updates and manage allowed staleness can inflate energy demands, especially when updates interrupt idle periods at the edge site and block processors from entering power-saving modes. Our research studies configuration management policies, their effect on energy footprint, and strategies to optimize them. We have observed that the policies yielding a low energy footprint differ from site to site and over time. We propose a data-driven approach that uses data collected at each edge site to predict an energy-efficient policy while also guarding against worst-case performance when the data-driven predictions err. We use a novel random-walk approach to manage data-driven policies that yield a low footprint for a representative trace of updates observed at an edge site. We are setting up four edge service benchmarks powered by AI inference to create realistic software update traces.
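The sketch below shows one way a random-walk search over staleness policies could look: perturb the allowed staleness locally and keep moves that lower a measured energy cost on the site's update trace. The cost model is a made-up placeholder; the poster does not specify one.

```python
# Greedy random-walk search over staleness policies; the cost model is a
# made-up placeholder (the poster does not specify one).
import random

def energy_footprint(staleness_hours, trace):
    # placeholder: updates arriving within the same staleness window are
    # batched into one processor wakeup; staleness itself carries a cost
    wakeups = len({t // staleness_hours for t in trace})
    return 5.0 * wakeups + 0.1 * staleness_hours

def random_walk(trace, steps=100, seed=0):
    random.seed(seed)
    policy, cost = 1, energy_footprint(1, trace)
    for _ in range(steps):
        candidate = max(1, policy + random.choice([-1, 1]))
        c = energy_footprint(candidate, trace)
        if c < cost:               # keep only improving moves
            policy, cost = candidate, c
    return policy

# trace: update arrival times in hours at one edge site (illustrative)
print(random_walk([1, 2, 3, 10, 11, 30, 31, 32]))
```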
{"title":"Poster: Configuration Management for Internet Services at the Edge: A Data-Driven Approach","authors":"Yue Zhang, Christopher Stewart","doi":"10.1109/SEC50012.2020.00020","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00020","url":null,"abstract":"Internet services are increasingly pushed from the remote cloud to the edge sites close to data sources to offer fast response time and low energy footprint. However, software deployed at edge sites must be updated frequently. Performing updates as soon as they are available consumes a large amount of energy. Configuration management tools that install software updates and manage allowed staleness can inflate energy demands, especially when updates interrupt idle periods at the edge site and block processors from entering power-saving modes. Our research studies configuration management policies, their effect on energy footprint and strategies to optimize them. We have observed that policies yielding low energy footprint differ from site to site and over time. We propose a data-driven approach that uses data collected at each edge site to predict an energy-efficient policy and also guards against worst-case performance if data-driven predictions error occurs. We use a novel randomwalk approach to manage data-driven policies that yield a low footprint for a representative trace of updates observed at an edge site. We are setting up 4 edge service benchmarks powered by AI inference to create realistic software update traces.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129833781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Black-box Image Extraction Attacks on RBF SVM Classification Model
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00058
Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih
Image extraction attacks on machine learning models seek to recover semantically meaningful training imagery from a trained classifier model. Such attacks are concerning because training data can include sensitive information. Research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an “average” of each class. We also correct common misperceptions about black-box image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. Our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.
CCS Concepts: • Computing methodologies → Machine learning → Machine learning approaches → Logical and relational learning; • Security and privacy → Systems security → Vulnerability management
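As white-box intuition for why RBF SVMs leak training data, note that a fitted SVM stores verbatim training samples as its support vectors, as the scikit-learn sketch below shows. The paper's attack is black-box and considerably more involved; this example only shows where leaked images live inside the model.

```python
# White-box illustration of why RBF SVMs can leak training imagery: the
# fitted model stores verbatim training samples as its support vectors.
# (The paper's attack is black-box and more involved; this only shows
# where the leaked images live inside the model.)
import numpy as np
from sklearn import datasets, svm

digits = datasets.load_digits()
X = digits.images.reshape(len(digits.images), -1) / 16.0  # flatten 8x8 images
clf = svm.SVC(kernel="rbf", gamma=0.001, C=10.0)
clf.fit(X, digits.target)

# Every support vector is an exact training image, readable pixel by pixel.
print("support vectors stored:", clf.support_vectors_.shape[0])
print(np.round(clf.support_vectors_[0].reshape(8, 8), 2))
```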
{"title":"Toward Black-box Image Extraction Attacks on RBF SVM Classification Model","authors":"Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih","doi":"10.1109/SEC50012.2020.00058","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00058","url":null,"abstract":"Image extraction attacks on machine learning models seek to recover semantically meaningful training imagery from a trained classifier model. Such attacks are concerning because training data include sensitive information. Research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an “average” of each class. Also, we correct common misperceptions about black-box image extraction attacks and developing a deep understanding of why some trained models are vulnerable to our attack while others are not. Our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.Ccs Concepts•Computing methodologies~Machine learning~Machine learning approaches~Logical and relational learning•Security and privacy ~Systems security~Vulnerability management","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127889125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoE-Based Server Selection for Mobile Video Streaming
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00066
Daniel Kanba Tapang, Siqi Huang, Xueqing Huang
Mobile devices make up the bulk of clients that stream video content over the internet, so improving mobile video streaming, one of the most popular services, has the potential to make the largest market impact. Video streaming giants like YouTube, Netflix, Hulu, and Amazon Video aim to provide the best quality of service and expand market share. Selecting the best server is critical for ensuring a satisfactory streaming experience on a mobile device. Traditional server selection strategies use proximity as the selection rule; improved strategies select servers by considering additional factors that also affect the quality of experience (QoE). More recently, reinforcement learning has been used to maximize QoE when selecting servers. This paper seeks to further develop an RL agent that performs better on mobile devices. The result is an RL agent that quickly learns to select the servers that offer the best QoE.
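As a baseline for what such an agent might look like, here is a minimal epsilon-greedy bandit over candidate servers with a running-mean QoE estimate per server. The paper's agent and its QoE reward are richer; the reward scale and exploration rate here are assumptions.

```python
# Minimal epsilon-greedy bandit for QoE-driven server selection; the
# paper's agent and QoE reward are richer, so treat this as a sketch.
import random

class ServerSelector:
    def __init__(self, n_servers, epsilon=0.1):
        self.eps = epsilon
        self.counts = [0] * n_servers
        self.values = [0.0] * n_servers   # running-mean QoE per server

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.values))   # explore
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, server, qoe):
        self.counts[server] += 1
        # incremental running-mean update
        self.values[server] += (qoe - self.values[server]) / self.counts[server]

# usage: pick a server, stream a segment, measure QoE, feed it back
sel = ServerSelector(n_servers=4)
s = sel.select()
sel.update(s, qoe=0.8)   # hypothetical measured QoE in [0, 1]
```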
{"title":"QoE-Based Server Selection for Mobile Video Streaming","authors":"Daniel Kanba Tapang, Siqi Huang, Xueqing Huang","doi":"10.1109/SEC50012.2020.00066","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00066","url":null,"abstract":"Mobile devices make up the bulk of clients that stream video content over the internet. Improving one of the most popular services, i.e., mobile video streaming, has the potential to make the most market impact. Video streaming giants like YouTube, Netflix, Hulu, and Amazon video aim to provide the best quality service and expand market share. The problem of selecting the best server is critical for ensuring the qualified experience for video streaming on a mobile device. Traditional server selection strategies use proximity as a server selection rule. Improved strategies select servers by considering more factors that also impact the quality of experience (QoE). Currently, reinforcement learning is being used to maximize QoE when selecting servers. This paper seeks to further develop an RL agent that performs better on mobile devices. The result is an RL agent that quickly learns to select servers that offer the best QoE.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133738568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: An Improvement on Distance based Positioning on Network Edges
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00026
Dawei Liu, A. Al-Bayatti, Wei Wang
Distance-based positioning methods have been widely used in today’s wireless networks to position network users. In this paper, we present a study of distance-based positioning at network edges. We show that existing methods may not be able to find the optimal position at network edges due to the presence of measurement noise and the use of biased estimation. To handle this problem, we propose an improvement to the estimation method. Simulation results show that the proposed improvement can reduce position error by 30% in 20% of a network area.
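For reference, the standard linearized least-squares multilateration that such methods build on is sketched below; the poster's bias correction at network edges is not reproduced, and the anchor layout and noise level are illustrative.

```python
# Standard linearized least-squares multilateration from noisy range
# measurements; the poster's edge bias correction is not reproduced.
import numpy as np

def multilaterate(anchors, dists):
    """anchors: (n, 2) known positions; dists: (n,) measured ranges."""
    a0, d0 = anchors[0], dists[0]
    # subtract the first range equation to linearize the system
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)
print(multilaterate(anchors, dists))   # close to [3, 4]
```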
{"title":"Poster: An Improvement on Distance based Positioning on Network Edges","authors":"Dawei Liu, A. Al-Bayatti, Wei Wang","doi":"10.1109/SEC50012.2020.00026","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00026","url":null,"abstract":"Distance based positioning methods have been widely used in today’s wireless networks for positioning network users. In this paper, we present a study on distance based positioning at network edges. We show that existing methods may not be able to find the optimal position at network edges due to the presence of measurement noise and the use of biased estimation. To handle this problem, we propose an improvement on the estimation method. Simulation results show that the proposed improvement can reduce position error by 30% in 20% of a network area.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128612322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}