
Latest publications from the 2020 IEEE/ACM Symposium on Edge Computing (SEC)

CHA: A Caching Framework for Home-based Voice Assistant Systems
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00041
Lanyu Xu, A. Iyengar, Weisong Shi
Voice assistant systems are becoming pervasive in our daily lives. However, current voice assistant systems rely on the cloud for command understanding and fulfillment, resulting in unstable performance and unnecessarily frequent network transmission. In this paper, we introduce CHA, an edge-based caching framework for voice assistant systems, especially for smart homes where resource-restricted edge devices can be deployed. Located between the voice assistant device and the cloud, CHA introduces a layered architecture with a modular design in each layer. By introducing an understanding module and adaptive learning, CHA understands the user’s intent with high accuracy. By maintaining a cache, CHA reduces interaction with the cloud and provides fast and stable responses in a smart home. Targeting resource-constrained edge devices, CHA applies joint classification and model pruning to a pre-trained language model to achieve both performance and system efficiency. We compare CHA to the status quo solution for voice assistant systems and show that CHA benefits them. We evaluate CHA on three edge devices that differ in hardware configuration and demonstrate its ability to meet latency and accuracy demands with efficient resource utilization. Our evaluation shows that, compared to the current solution for voice assistant systems, CHA provides at least a 70% speedup in responses to frequently asked voice commands, with less than 13% CPU consumption and less than 9% memory consumption when running on a Raspberry Pi.
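To make the cache-between-device-and-cloud idea above concrete, here is a minimal Python sketch. The `classify_intent` and `cloud_fulfill` callables and the TTL policy are illustrative assumptions, not CHA's actual design; in the paper, understanding is done by a pruned pre-trained language model and caching is refined by adaptive learning.

```python
import time

class CommandCache:
    """Toy edge-side cache for voice-assistant responses (illustrative only)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.entries = {}  # intent -> (response, stored_at)

    def lookup(self, intent):
        hit = self.entries.get(intent)
        if hit is None:
            return None
        response, stored_at = hit
        if time.time() - stored_at > self.ttl:  # stale entry: evict, report miss
            del self.entries[intent]
            return None
        return response

    def store(self, intent, response):
        self.entries[intent] = (response, time.time())

def handle_command(text, cache, classify_intent, cloud_fulfill):
    """Serve frequent commands from the edge; fall back to the cloud on a miss."""
    intent = classify_intent(text)   # hypothetical on-device intent classifier
    response = cache.lookup(intent)
    if response is None:             # miss costs one cloud round trip
        response = cloud_fulfill(intent)
        cache.store(intent, response)
    return response
```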
Citations: 5
Garbage Collection for Edge Computing
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00044
A. García, D. May, E. Nutting
The shift towards edge computing is bringing data processing and storage closer to the edge of the network. This makes it desirable to use productive modern programming languages, like Python and C#, to program edge devices. Modern programming languages mitigate the added complexity of edge computing by abstracting software developers away from tedious tasks like freeing unused memory. But these languages rely on garbage collectors that impose high overheads and introduce unpredictable pauses, so they are rarely used in small embedded systems that make up the majority of edge devices. We propose a novel hardware garbage collector that addresses these problems to unlock the benefits of modern languages in edge devices.
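As background for readers unfamiliar with why collectors pause programs, below is a minimal stop-the-world mark-and-sweep sketch in Python. It models the software baseline whose overheads the paper criticizes, not the proposed hardware collector.

```python
class Obj:
    """A heap object holding references to other heap objects."""
    def __init__(self, refs=()):
        self.refs = list(refs)
        self.marked = False

def mark(roots):
    # Trace every object reachable from the root set.
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if not obj.marked:
            obj.marked = True
            stack.extend(obj.refs)

def sweep(heap):
    # Keep marked objects, reclaim the rest, reset marks for the next cycle.
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False
    return live

def collect(heap, roots):
    """Stop-the-world collection: the program is paused for the whole
    trace, which is where the unpredictable pauses come from."""
    mark(roots)
    return sweep(heap)

# Example: c becomes unreachable once b drops its reference to it.
a, b, c = Obj(), Obj(), Obj()
b.refs.append(c)
heap, roots = [a, b, c], [a, b]
b.refs.clear()
assert collect(heap, roots) == [a, b]
```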
Citations: 1
Exploring Decentralized Collaboration in Heterogeneous Edge Training
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00069
Xiang Chen, Zhuwei Qin
Recent progress in deep learning techniques has enabled collaborative edge training, which usually deploys identical neural network models globally on multiple devices to aggregate parameter updates over distributed data collection. However, as more and more heterogeneous edge devices are involved in practical training, identical model deployment over collaborative edge devices cannot be guaranteed: on one hand, weak edge devices with fewer computation resources may not keep up with stronger ones’ training progress, and appropriate local customization of model training is necessary to balance the collaboration. On the other hand, a particular local edge device may have a specific learning-task preference, while a globally identical model would exceed the practical local demand and incur unnecessary computation cost. Therefore, in this work we explore collaborative learning with heterogeneous convolutional neural networks (CNNs), aiming to address the aforementioned practical problems. Specifically, we propose a novel decentralized collaborative training method that decouples a target CNN model into independently trainable sub-models, each corresponding to a subset of learning tasks for one edge device. After the sub-models are well trained on edge nodes, the model parameters for individual learning tasks can be harvested from the local model on every edge device and the global training model assembled back into a single piece. Experiments demonstrate that, for AlexNet and VGG on the CIFAR10, CIFAR100, and KWS datasets, our decentralized training method reduces computation load by up to 11.8× while achieving central-server test accuracy.
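A rough sketch of the harvest-and-ensemble step described above, using NumPy arrays as stand-ins for locally trained classifier heads. The contiguous task split and the per-class weight-row layout are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def split_tasks(num_classes, num_devices):
    """Assign each device a subset of the learning tasks (classes)."""
    return [list(c) for c in np.array_split(np.arange(num_classes), num_devices)]

def ensemble(submodels):
    """Harvest per-task parameters from every device and stitch the
    global model back together into a single classifier head."""
    rows = {}
    for tasks, weights in submodels:  # weights: one row per assigned task
        for i, t in enumerate(tasks):
            rows[t] = weights[i]
    return np.stack([rows[t] for t in sorted(rows)])

# Example: 10 classes spread over 3 heterogeneous devices, feature dim 64.
tasks = split_tasks(10, 3)
submodels = [(t, np.random.randn(len(t), 64)) for t in tasks]  # stand-in training
global_head = ensemble(submodels)
assert global_head.shape == (10, 64)
```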
Citations: 2
Demo: EdgeVPN.io: Open-source Virtual Private Network for Seamless Edge Computing with Kubernetes
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00032
R. Figueiredo, Kensworth C. Subratie
Edge and fog computing encompass a variety of technologies that are poised to enable new applications across the Internet that support data capture, storage, processing, and communication across the networking continuum. These environments pose new challenges to the design and implementation of networks, as membership can be dynamic and devices are heterogeneous, widely distributed geographically, and in proximity to end-users, as is the case with mobile and Internet-of-Things (IoT) devices. We present a demonstration of EdgeVPN.io (Evio for short), an open-source, programmable, software-defined network that addresses challenges in the deployment of virtual networks spanning distributed edge and cloud resources, in particular highlighting its use in support of the Kubernetes container orchestration middleware. The demo highlights a deployment of unmodified Kubernetes middleware across a virtual cluster comprising virtual machines deployed both in cloud providers and in distinct networks at the edge, where all nodes are assigned private IP addresses and subject to different NAT (Network Address Translation) middleboxes, connected through an Evio virtual network. The demo includes an overview of the configuration of Kubernetes and Evio nodes and the deployment of Docker-based container pods, highlighting the seamless connectivity for TCP/IP applications deployed on the pods.
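A small sketch of the property the demo exercises: once nodes join the Evio overlay, a plain TCP connection to a peer's overlay address should succeed even when the peer sits behind a different NAT. The overlay address and port below are invented for illustration.

```python
import socket

def pod_reachable(overlay_ip, port, timeout=3.0):
    """Probe a TCP service on a peer pod via its virtual-network address."""
    try:
        with socket.create_connection((overlay_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: a cloud-hosted pod probing a pod on a NATed edge node.
print(pod_reachable("10.10.100.5", 8080))
```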
Citations: 10
Demo: Emulating Geo-Distributed Fog Services
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00031
Moysis Symeonides, Z. Georgiou, Demetris Trihinas, G. Pallis, M. Dikaiakos
For the better part of the last two decades, we have been witnessing the proliferation of IoT devices, as well as an exponential growth in the volume of data generated outside of datacenters. With the generated data at the extremes of the network and the restricted device-to-cloud bandwidth, data migration is becoming the major barrier for cloud-based IoT services [1]. To alleviate these challenges, Fog Computing extends the Cloud’s capabilities closer to IoT devices.
Citations: 1
Poster: An Accelerator for Fast Container-based Applications Deployment on the Edge
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00027
Jun Lin Chen, D. Liaqat, Moshe Gabel, E. D. Lara
Containers are an emerging approach for application deployment on the edge, as they are modular, lightweight, and easy to use for development and maintenance. However, deploying containers in an edge computing environment brings new challenges: high latency links, limited resources, and user mobility. This work proposes a new edge deployment architecture that accelerates deployment and updates for edge applications. By overcoming the design limitations of current registries, the accelerator would reduce the deployment, start-up, and update times of container-based applications.
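The poster does not spell out its design, but a common building block for such accelerators is layer-level deduplication: container images are content-addressed stacks of layers, so a node only needs to fetch the layers it lacks. A hedged sketch of that idea, with made-up digests:

```python
def plan_pull(manifest_layers, cached_digests):
    """Return only the image layers an edge node still needs to fetch."""
    return [d for d in manifest_layers if d not in cached_digests]

# Example: updating an app whose base layers survived the previous deploy.
manifest = ["sha256:aaa", "sha256:bbb", "sha256:ccc"]  # layers of the new image
cached = {"sha256:aaa", "sha256:bbb"}                  # already on the node
print(plan_pull(manifest, cached))                     # -> ['sha256:ccc']
```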
Citations: 0
Benchmarking the Accuracy of Algorithms for Memory-Constrained Image Classification
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00059
S. Müksch, Theo X. Olausson, John Wilhelm, Pavlos Andreadis
Convolutional Neural Networks, or CNNs, are the state of the art for image classification, but typically come at the cost of a large memory footprint. This limits their usefulness in edge computing applications, where memory is often a scarce resource. Recently, there has been significant progress in the field of image classification on such memory-constrained devices, with novel contributions like the ProtoNN, Bonsai and FastGRNN algorithms. These have been shown to reach up to 98.2% accuracy on optical character recognition using MNIST-10, with a memory footprint as little as 6KB. However, their potential on more complex multi-class and multi-channel image classification has yet to be determined. In this paper, we compare CNNs with ProtoNN, Bonsai and FastGRNN when applied to 3-channel image classification using CIFAR-10. For our analysis, we use the existing Direct Convolution algorithm to implement the CNNs memory-optimally and propose new methods of adjusting the FastGRNN model to work with multi-channel images. We extend the evaluation of each algorithm to a memory size budget of 8KB, 16KB, 32KB, 64KB and 128KB to show quantitatively that Direct Convolution CNNs perform best for all chosen budgets, with a top performance of 65.7% accuracy at a memory footprint of 58.23KB.
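A back-of-the-envelope helper for reading such memory budgets: a model's parameter footprint is simply its parameter count times the bytes per parameter. The ~14,900-parameter figure below is inferred from the reported 58.23KB assuming float32 weights, so treat it as an approximation.

```python
def footprint_kb(param_count, bytes_per_param=4):
    """Parameter memory of a model in KB (float32 by default)."""
    return param_count * bytes_per_param / 1024

def fits(param_count, budget_kb, bytes_per_param=4):
    return footprint_kb(param_count, bytes_per_param) <= budget_kb

# Which of the paper's budgets could hold the best Direct Convolution CNN?
for budget in (8, 16, 32, 64, 128):
    print(f"{budget}KB:", fits(14_900, budget))  # True only for 64KB and 128KB
```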
Citations: 1
CloudSLAM: Edge Offloading of Stateful Vehicular Applications
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00018
Kwame-Lante Wright, A. Sivakumar, P. Steenkiste, Bo Yu, F. Bai
Vehicular applications are becoming increasingly complex and resource hungry (e.g. autonomous driving). Today, they run entirely on the vehicle, which is a costly solution that also imposes undesirable resource constraints. This paper uses Simultaneous Localization and Mapping (SLAM) as an example application to explore how these applications can instead leverage edge clouds, utilizing their inexpensive and elastic resource pool. This is challenging as these applications are often latency-sensitive and mission-critical. They also process high-bandwidth sensor data streams and maintain large, complex data structures. As a result, traditional offloading techniques generate too much traffic, incurring high delay. To overcome these challenges, we designed CloudSLAM. It partitions SLAM between the vehicle and the edge. To manage the complex, replicated SLAM state, we propose a new consistency model, Output-driven Consistency, that allows us to maintain a level of consistency that is sufficient for accurate SLAM output while minimizing network traffic. This paper motivates and describes our offloading design and discusses the results of an extensive performance evaluation of a CloudSLAM prototype based on ORB-SLAM.
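The abstract does not define Output-driven Consistency in detail, so the following is only a guess at its flavor: replicate SLAM state between vehicle and edge when, and only when, the divergence would be visible in the output, here caricatured as a pose estimate drifting past a tolerance.

```python
def needs_sync(local_pose, synced_pose, tolerance_m=0.05):
    """Output-driven trigger: sync replicated state only when divergence
    is large enough to change what the application would output."""
    drift = sum((a - b) ** 2 for a, b in zip(local_pose, synced_pose)) ** 0.5
    return drift > tolerance_m

# Example: a 2cm drift stays local; a 12cm drift triggers replication.
print(needs_sync((1.00, 2.00), (1.02, 2.00)))  # False
print(needs_sync((1.00, 2.00), (1.12, 2.00)))  # True
```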
Citations: 17
Elasticity Control for Latency-Intolerant Mobile Edge Applications
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00013
Chanh Nguyen, C. Klein, E. Elmroth
Elasticity is a fundamental property required for Mobile Edge Clouds (MECs) to become mature computing platforms hosting software applications. However, MECs must cope with several challenges that do not arise in the context of conventional cloud platforms. These include the potentially highly distributed geographical deployment, heterogeneity, and limited resource capacity of Edge Data Centers (EDCs), and end-user mobility. In this paper, we present an elasticity controller that helps MECs overcome these challenges through automatic, proactive resource scaling. The controller utilizes information on the physical locations of EDCs and the correlation of workload changes in physically neighboring EDCs to predict request arrival rates at EDCs. These predictions are used as inputs for a queueing-theory-driven performance model that estimates the number of resources that should be provisioned to EDCs in order to meet predefined Service Level Objectives (SLOs) while maximizing resource utilization. The controller also incorporates a group-level load balancer that redirects requests among EDCs during runtime so as to minimize the request rejection rate. We evaluate our approach by performing simulations with an emulated MEC deployed over a metropolitan area and a simulated application workload using a real-world user mobility trace. The results show that our proposed proactive controller exhibits better scaling behavior than a state-of-the-art reactive controller and increases the efficiency of resource provisioning, thereby helping MECs to sustain resource utilization and rejection rates that satisfy predefined SLOs while maintaining system stability.
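The abstract does not specify which queueing model the controller uses; as one plausible instantiation, here is an Erlang-C (M/M/c) sketch that finds the smallest number of instances meeting a waiting-time SLO for a predicted arrival rate. The example numbers are invented.

```python
import math

def erlang_c(lam, mu, c):
    """Probability that an arriving request must wait in an M/M/c queue."""
    a = lam / mu                      # offered load in Erlangs
    if c <= a:
        return 1.0                    # unstable system: everyone waits
    tail = a**c / math.factorial(c) * c / (c - a)
    body = sum(a**k / math.factorial(k) for k in range(c))
    return tail / (body + tail)

def servers_for_slo(lam, mu, max_wait, target=0.95):
    """Smallest c such that P(wait <= max_wait) >= target under M/M/c."""
    c = max(1, math.ceil(lam / mu))
    while True:
        p_wait = erlang_c(lam, mu, c)
        p_meet = 1 - p_wait * math.exp(-(c * mu - lam) * max_wait)
        if p_meet >= target:
            return c
        c += 1

# Example: 80 req/s predicted at an EDC, 10 req/s per instance, 100ms SLO.
print(servers_for_slo(80, 10, 0.1))  # -> 11 under these assumptions
```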
Citations: 5
Feather: Hierarchical Querying for the Edge
Pub Date : 2020-11-01 DOI: 10.1109/SEC50012.2020.00039
S. H. Mortazavi, Mohammad Salehe, Moshe Gabel, E. D. Lara
In many edge computing scenarios data is generated over a wide geographic area and is stored near the edges, before being pushed upstream to a hierarchy of data centers. Querying such geo-distributed data traditionally falls into two general approaches: push incoming queries down to the edge where the data is, or run them locally in the cloud. Feather is a hybrid querying scheme that exploits the hierarchical structure of such geo-distributed systems to trade temporal accuracy (freshness) for improved latency and reduced bandwidth. Rather than pushing queries to the edge or executing them in the cloud, Feather selectively pushes queries towards the edge while guaranteeing a user-supplied per-query freshness limit. Partial results are then aggregated along the path to the cloud, until a final result is provided with guaranteed freshness. We evaluate Feather in controlled experiments using real-world geo-tagged traces, as well as a real system running across 10 datacenters in 3 continents. Feather combines the best of cloud and edge execution, answering queries with a fraction of edge latency, providing fresher answers than cloud, while reducing network bandwidth and load on edges.
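A toy sketch of the freshness-gated routing described above: answer from a tier's cached aggregate if it is within the user's freshness limit, otherwise fan the query out toward the edge and merge partial results on the way back up. Summation as the merge operator and the `Node` fields are illustrative assumptions.

```python
import time

class Node:
    """One tier in the cloud-to-edge hierarchy (illustrative only)."""
    def __init__(self, cached_value, last_update, children=()):
        self.cached_value = cached_value
        self.last_update = last_update
        self.children = list(children)

def query(node, freshness_limit_s, now=None):
    """Serve cached data if fresh enough; otherwise push toward the edge."""
    now = time.time() if now is None else now
    if (now - node.last_update) <= freshness_limit_s or not node.children:
        return node.cached_value
    return sum(query(c, freshness_limit_s, now) for c in node.children)

# Example: the cloud cache is 30s old, so a 10s limit forces edge fan-out.
edges = [Node(4, last_update=99), Node(7, last_update=98)]
cloud = Node(10, last_update=70, children=edges)
print(query(cloud, freshness_limit_s=10, now=100))  # -> 11 (from the edges,
                                                    #    not the stale cloud 10)
```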
Citations: 5