
Latest publications: 2020 IEEE/ACM Symposium on Edge Computing (SEC)

Rumor Detection of COVID-19 Pandemic on Online Social Networks
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00055
Anqi Shi, Zheng Qu, Qingyao Jia, Chen Lyu
The novel coronavirus epidemic (COVID-19) has received widespread attention, causing a health crisis across the world. A massive amount of information about COVID-19 has emerged on social networks. However, not all information disseminated on social networks is true and reliable. In response to the COVID-19 pandemic, only real information is valuable to the authorities and the public. Therefore, detecting COVID-19 rumors on social networks is an essential task. In this paper, we attempt to solve this problem with a machine learning approach on the Weibo platform. First, we extract text features, user-related features, interaction-based features, and emotion-based features from COVID-19-related messages. Second, by combining these four types of features, we design an intelligent rumor detection model using ensemble learning. Finally, we conduct extensive experiments on data collected from Weibo. Experimental results indicate that our model significantly improves the accuracy of rumor detection, achieving an accuracy of 91% and an AUC of 0.96.
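The feature-combination step can be sketched as a majority-vote ensemble over the four feature groups. This is an illustrative toy, not the authors' model: the per-group rules, field names, and thresholds below are all hypothetical stand-ins for trained classifiers.

```python
# Illustrative sketch (not the paper's code): majority-vote ensemble over four
# hypothetical per-feature-group classifiers for rumor detection.

def text_clf(msg):        # text features, e.g. rumor-indicative keywords
    return any(w in msg["text"] for w in ("cure", "secret", "miracle"))

def user_clf(msg):        # user-related features, e.g. unverified new account
    return not msg["verified"] and msg["account_age_days"] < 30

def interaction_clf(msg): # interaction features, e.g. reposts far exceed comments
    return msg["reposts"] > 10 * max(msg["comments"], 1)

def emotion_clf(msg):     # emotion features, e.g. strongly negative sentiment
    return msg["sentiment"] < -0.5

def ensemble_predict(msg):
    votes = [clf(msg) for clf in (text_clf, user_clf, interaction_clf, emotion_clf)]
    return sum(votes) >= 2   # majority vote: flag as a likely rumor

msg = {"text": "secret miracle cure!", "verified": False, "account_age_days": 3,
       "reposts": 500, "comments": 4, "sentiment": -0.9}
print(ensemble_predict(msg))  # True -> flagged as a likely rumor
```

In the paper, each group would be a trained model and the combiner an ensemble learner rather than fixed rules; the sketch only shows how the four feature views are fused.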
Citations: 13
NanoLambda: Implementing Functions as a Service at All Resource Scales for the Internet of Things.
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00035
Gareth George, F. Bakir, R. Wolski, C. Krintz
Internet of Things (IoT) devices are becoming increasingly prevalent in our environment, yet the process of programming these devices and processing the data they produce remains difficult. Typically, data is processed on device, involving arduous work in low-level languages, or data is moved to the cloud, where abundant resources are available for Functions as a Service (FaaS) or other handlers. FaaS is an emerging category of flexible computing services, where developers deploy self-contained functions to be run in portable and secure containerized environments; however, at the moment, these functions are limited to running in the cloud or, in some cases, at the "edge" of the network using resource-rich, Linux-based systems. In this paper, we present NanoLambda, a portable platform that brings FaaS, high-level language programming, and familiar cloud service APIs to non-Linux and microcontroller-based IoT devices. To enable this, NanoLambda couples a new, minimal Python runtime system that we have designed for the least capable end of the IoT device spectrum with API compatibility for AWS Lambda and S3. NanoLambda transfers functions between IoT devices (sensors, edge, cloud), providing power and latency savings while retaining the programmer productivity benefits of high-level languages and FaaS. A key feature of NanoLambda is a scheduler that intelligently places function executions across multi-scale IoT deployments according to resource availability and power constraints. We evaluate a range of applications that use NanoLambda to run on devices as small as the ESP8266 with 64 KB of RAM and 512 KB of flash storage.
Citations: 18
GLAMAR: Geo-Location Assisted Mobile Augmented Reality for Industrial Automation
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00036
M. Uddin, S. Mukherjee, M. Kodialam, T. Lakshman
Mobile Augmented Reality (MAR) is going to play an important role in industrial automation. In order to tag a physical object in the MAR world, a smartphone running MAR-based applications must know the precise location of the object in the real world. Tracking and localizing a large number of objects in an industrial environment can become a huge burden for the smartphone due to compute and battery requirements. In this paper we propose GLAMAR, a novel framework that leverages externally provided geo-location of the objects and IMU sensor information from the objects (both of which can be noisy) to locate them precisely in the MAR world. GLAMAR offloads heavy-duty computation to the edge and supports building MAR-based applications using commercial development packages. We develop a regenerative particle filter and a continuously improving transformation-matrix computation methodology to dramatically improve the positional accuracy of objects in the real and the AR world. Our prototype implementation on the Android platform using ARCore shows the practicality of GLAMAR in developing MAR-based applications with high precision, efficiency, and a more realistic experience. GLAMAR is able to achieve less than 10 cm error compared to the ground truth for both stationary and moving objects, and reduces CPU overhead by 83% and battery consumption by 80% for mobile devices.
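The fusion of noisy geo-location fixes with noisy IMU data can be illustrated with a minimal 1-D particle filter. This is a generic sketch under assumed noise parameters, not GLAMAR's regenerative filter: predict each particle with the IMU velocity, weight by the likelihood of the geo-location fix, then resample.

```python
import random, math

# Minimal 1-D particle filter sketch (assumed parameters, not GLAMAR's code):
# fuse a noisy IMU velocity with noisy external geo-location fixes.
random.seed(0)

N = 2000
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # initial position guesses

def step(particles, imu_velocity, geo_fix, dt=1.0, imu_sigma=0.2, geo_sigma=1.0):
    # Predict: propagate each particle with the (noisy) IMU velocity.
    pred = [p + imu_velocity * dt + random.gauss(0.0, imu_sigma) for p in particles]
    # Update: weight by Gaussian likelihood of the geo-location measurement.
    w = [math.exp(-0.5 * ((p - geo_fix) / geo_sigma) ** 2) for p in pred]
    total = sum(w)
    w = [x / total for x in w]
    # Resample particles in proportion to their weights.
    return random.choices(pred, weights=w, k=len(pred))

true_pos = 2.0
for _ in range(20):                          # object moves at 0.5 m/s
    true_pos += 0.5
    imu = 0.5 + random.gauss(0.0, 0.1)       # noisy IMU velocity reading
    geo = true_pos + random.gauss(0.0, 1.0)  # noisy geo-location fix
    particles = step(particles, imu, geo)

estimate = sum(particles) / len(particles)
print(abs(estimate - true_pos))  # typically well below the raw 1 m geo noise
```

The filtered estimate is sharper than either sensor alone, which is the basic effect GLAMAR exploits before mapping positions into AR coordinates.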
Citations: 1
Federated Learning with Heterogeneous Quantization
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00060
Cong Shen, Shengbo Chen
Quantization of local model updates before uploading them to the parameter server is a primary solution for reducing communication overhead in federated learning. However, prior literature always assumes homogeneous quantization across clients, while in reality devices are heterogeneous and support different levels of quantization precision. This heterogeneity of quantization poses a new challenge: fine-quantized model updates are more accurate than coarse-quantized ones, and how to optimally aggregate them at the server is an unsolved problem. In this paper, we propose FEDHQ: Federated Learning with Heterogeneous Quantization. In particular, FEDHQ allocates different weights to clients by minimizing the convergence-rate upper bound, which is a function of the quantization errors of all clients. We derive the convergence rate of FEDHQ under strongly convex loss functions. To further accelerate convergence, the instantaneous quantization error is computed and piggybacked when each client uploads its local model update, and the server dynamically calculates the corresponding weight for the current round. Numerical experiments demonstrate the performance advantages of FEDHQ+ over conventional FEDAVG with standard equal weights, and over a heuristic scheme that assigns weights linearly proportional to the clients' quantization precision.
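The core idea, weighting clients by quantization quality rather than equally, can be sketched as follows. This is an illustrative heuristic under assumed uniform quantization, not the paper's exact bound-minimizing rule: weights are set inversely proportional to each client's quantization error, so finer-quantized clients count more.

```python
# Illustrative sketch (assumed setup, not FEDHQ's exact weighting): aggregate
# quantized client updates with weights inversely proportional to each
# client's quantization error.

def quantize(x, bits):
    # Uniform quantization of x in [-1, 1] to 2^bits levels.
    levels = 2 ** bits - 1
    return round((x + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

def aggregate(updates, bits_per_client):
    # Per-client quantization error scales with the step size 1/(2^b - 1).
    errs = [1.0 / (2 ** b - 1) for b in bits_per_client]
    inv = [1.0 / e for e in errs]
    weights = [v / sum(inv) for v in inv]          # normalized to sum to 1
    q = [quantize(u, b) for u, b in zip(updates, bits_per_client)]
    return sum(w * x for w, x in zip(weights, q)), weights

update = 0.3                       # every client holds the same true update
agg, weights = aggregate([update] * 3, bits_per_client=[2, 4, 8])
print(weights)                     # the 8-bit client dominates
print(abs(agg - update))           # aggregation error stays small
```

Compared with equal weights, this kind of error-aware weighting lets the server lean on the most precise updates, which is the effect FEDHQ formalizes through its convergence-rate bound.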
Citations: 6
Chatbot Security and Privacy in the Age of Personal Assistants
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00057
Winson Ye, Qun Li
The rise of personal assistants serves as a testament to the growing popularity of chatbots. However, as the field advances, it is important for the conversational AI community to keep in mind any potential vulnerabilities in existing architectures and how attackers could take advantage of them. Towards this end, we present a survey of existing dialogue system vulnerabilities in security and privacy. We define chatbot security and give some background regarding the state of the art in the field. This analysis features a comprehensive description of potential attacks on each module in a typical chatbot architecture: the client module, communication module, response generation module, and database module.
Citations: 14
Secure and Energy-Efficient Offloading and Resource Allocation in a NOMA-Based MEC Network
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00063
Qun Wang, Han Hu, Haijian Sun, R. Hu
Energy efficiency and security are two critical issues for mobile edge computing (MEC) networks. With stochastic task arrivals, a time-varying dynamic environment, and passive attackers, it is very challenging to offload computation tasks securely and efficiently. In this paper, we study the task offloading and resource allocation problem in a non-orthogonal multiple access (NOMA) assisted MEC network with security and energy-efficiency considerations. To tackle the problem, a dynamic secure task offloading and resource allocation algorithm is proposed based on Lyapunov optimization theory. A stochastic non-convex problem is formulated to jointly optimize the local CPU frequency and transmit power, aiming at maximizing the network energy efficiency, which is defined as the ratio of the long-term average secure rate to the long-term average power consumption of all users. The formulated problem is decomposed into deterministic sub-problems in each time slot. The optimal local CPU cycle frequency and transmit power of each user are given in closed form. Simulation results evaluate the impacts of different parameters on the efficiency metrics and demonstrate that the proposed method achieves better energy efficiency than other benchmark methods.
Citations: 6
Poster: Configuration Management for Internet Services at the Edge: A Data-Driven Approach
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00020
Yue Zhang, Christopher Stewart
Internet services are increasingly pushed from the remote cloud to edge sites close to data sources to offer fast response times and a low energy footprint. However, software deployed at edge sites must be updated frequently. Performing updates as soon as they are available consumes a large amount of energy. Configuration management tools that install software updates and manage allowed staleness can inflate energy demands, especially when updates interrupt idle periods at the edge site and block processors from entering power-saving modes. Our research studies configuration management policies, their effect on energy footprint, and strategies to optimize them. We have observed that the policies yielding a low energy footprint differ from site to site and over time. We propose a data-driven approach that uses data collected at each edge site to predict an energy-efficient policy, and that also guards against worst-case performance if data-driven prediction errors occur. We use a novel random-walk approach to manage data-driven policies that yield a low footprint for a representative trace of updates observed at an edge site. We are setting up 4 edge service benchmarks powered by AI inference to create realistic software update traces.
Citations: 1
Toward Black-box Image Extraction Attacks on RBF SVM Classification Model
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00058
Michael R. Clark, Peter Swartz, Andrew Alten, Raed M. Salih
Image extraction attacks on machine learning models seek to recover semantically meaningful training imagery from a trained classifier model. Such attacks are concerning because training data include sensitive information. Research has shown that extracting training images is generally much harder than model inversion, which attempts to duplicate the functionality of the model. In this paper, we use the RBF SVM classifier to show that we can extract individual training images from models trained on thousands of images, which refutes the notion that these attacks can only extract an "average" of each class. We also correct common misperceptions about black-box image extraction attacks and develop a deep understanding of why some trained models are vulnerable to our attack while others are not. Our work is the first to show semantically meaningful images extracted from the RBF SVM classifier.
Citations: 1
QoE-Based Server Selection for Mobile Video Streaming
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00066
Daniel Kanba Tapang, Siqi Huang, Xueqing Huang
Mobile devices make up the bulk of clients that stream video content over the internet. Improving one of the most popular services, i.e., mobile video streaming, therefore has the potential to make the greatest market impact. Video streaming giants like YouTube, Netflix, Hulu, and Amazon Video aim to provide the best quality of service and expand market share. Selecting the best server is critical for ensuring a high-quality streaming experience on a mobile device. Traditional server selection strategies use proximity as the selection rule. Improved strategies select servers by considering additional factors that impact the quality of experience (QoE). Currently, reinforcement learning is being used to maximize QoE when selecting servers. This paper seeks to further develop an RL agent that performs better on mobile devices. The result is an RL agent that quickly learns to select the servers that offer the best QoE.
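QoE-driven server selection can be framed as a multi-armed bandit, one arm per candidate server. The sketch below is a toy epsilon-greedy agent under an assumed QoE model (the per-server means and noise are made up), not the paper's agent, but it shows the learn-to-select loop.

```python
import random

# Toy sketch (assumed QoE model, not the paper's agent): an epsilon-greedy
# bandit that learns which streaming server yields the best average QoE.
random.seed(1)

true_qoe = [0.55, 0.80, 0.60]          # hidden mean QoE per server (assumed)
est = [0.0] * len(true_qoe)            # running QoE estimates
counts = [0] * len(true_qoe)
eps = 0.1

for t in range(5000):
    if random.random() < eps:
        s = random.randrange(len(true_qoe))                  # explore
    else:
        s = max(range(len(true_qoe)), key=lambda i: est[i])  # exploit
    reward = true_qoe[s] + random.gauss(0.0, 0.1)  # noisy observed QoE
    counts[s] += 1
    est[s] += (reward - est[s]) / counts[s]        # incremental mean update

best = max(range(len(est)), key=lambda i: est[i])
print(best)   # index of the server the agent settles on
```

A deployed agent would condition on network state (bandwidth, RTT, buffer level) rather than treating servers as fixed arms, but the explore/exploit trade-off is the same.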
{"title":"QoE-Based Server Selection for Mobile Video Streaming","authors":"Daniel Kanba Tapang, Siqi Huang, Xueqing Huang","doi":"10.1109/SEC50012.2020.00066","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00066","url":null,"abstract":"Mobile devices make up the bulk of clients that stream video content over the internet. Improving one of the most popular services, i.e., mobile video streaming, has the potential to make the most market impact. Video streaming giants like YouTube, Netflix, Hulu, and Amazon video aim to provide the best quality service and expand market share. The problem of selecting the best server is critical for ensuring the qualified experience for video streaming on a mobile device. Traditional server selection strategies use proximity as a server selection rule. Improved strategies select servers by considering more factors that also impact the quality of experience (QoE). Currently, reinforcement learning is being used to maximize QoE when selecting servers. This paper seeks to further develop an RL agent that performs better on mobile devices. The result is an RL agent that quickly learns to select servers that offer the best QoE.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133738568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
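The paper above applies reinforcement learning to QoE-driven server selection, but the abstract does not specify the agent. One of the simplest possible formulations is a multi-armed bandit with an epsilon-greedy policy — the server set, mean QoE values, noise model, and hyperparameters below are assumptions for illustration, not the paper's design:

```python
# Epsilon-greedy bandit: each arm is a candidate server, each reward is
# an observed QoE score. All numeric values are hypothetical.
import random

def select_server(q_values, epsilon, rng):
    """Explore a random server with prob. epsilon, else exploit the best."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def run_bandit(mean_qoe, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q_values = [0.0] * len(mean_qoe)  # running QoE estimate per server
    counts = [0] * len(mean_qoe)      # times each server was chosen
    for _ in range(steps):
        i = select_server(q_values, epsilon, rng)
        reward = mean_qoe[i] + rng.gauss(0, 0.1)  # noisy observed QoE
        counts[i] += 1
        q_values[i] += (reward - q_values[i]) / counts[i]  # incremental mean
    return q_values, counts

q, c = run_bandit([0.6, 0.8, 0.5])  # hypothetical mean QoE of 3 servers
best = max(range(len(q)), key=lambda i: q[i])
```

After enough steps the agent concentrates its choices on the server with the highest mean QoE, which is the "quickly learns to select servers that offer the best QoE" behavior the abstract describes, in its most stripped-down form.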
Poster: An Improvement on Distance based Positioning on Network Edges
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00026
Dawei Liu, A. Al-Bayatti, Wei Wang
Distance based positioning methods have been widely used in today’s wireless networks for positioning network users. In this paper, we present a study on distance based positioning at network edges. We show that existing methods may not be able to find the optimal position at network edges due to the presence of measurement noise and the use of biased estimation. To handle this problem, we propose an improvement on the estimation method. Simulation results show that the proposed improvement can reduce position error by 30% in 20% of a network area.
{"title":"Poster: An Improvement on Distance based Positioning on Network Edges","authors":"Dawei Liu, A. Al-Bayatti, Wei Wang","doi":"10.1109/SEC50012.2020.00026","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00026","url":null,"abstract":"Distance based positioning methods have been widely used in today’s wireless networks for positioning network users. In this paper, we present a study on distance based positioning at network edges. We show that existing methods may not be able to find the optimal position at network edges due to the presence of measurement noise and the use of biased estimation. To handle this problem, we propose an improvement on the estimation method. Simulation results show that the proposed improvement can reduce position error by 30% in 20% of a network area.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128612322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
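The poster above improves on distance-based positioning; its specific estimator is not given in the abstract. The standard baseline such work starts from is linearized least-squares trilateration — the anchor coordinates and target position below are hypothetical, and this sketch omits the poster's edge-specific correction:

```python
# Linearized least-squares trilateration: subtracting the first anchor's
# circle equation |x - a_0|^2 = d_0^2 from the others cancels the
# quadratic term, leaving a linear system in the unknown position x.
import numpy as np

def trilaterate(anchors, dists):
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical anchors
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
est = trilaterate(anchors, dists)
```

With noise-free distances this recovers the true position exactly; with noisy measurements the least-squares estimate becomes biased near the convex hull's boundary, which is the edge effect the poster targets.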
Journal
2020 IEEE/ACM Symposium on Edge Computing (SEC)