
Proceedings of the Second ACM/IEEE Symposium on Edge Computing: Latest Publications

CIRCE - a runtime scheduler for DAG-based dispersed computing: demo
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3132451
Aleksandra Knezevic, Quynh Nguyen, Jason A. Tran, Pradipta Ghosh, Pranav Sakulkar, B. Krishnamachari, M. Annavaram
CIRCE (Centralized Runtime sChedulEr) is a runtime scheduling software tool for dispersed computing. It can deploy pipelined computations described in the form of a Directed Acyclic Graph (DAG) on multiple geographically dispersed compute nodes at the edge and in the cloud. A key innovation in this scheduler compared to prior work is the incorporation of a run-time network profiler which accounts for the network performance among nodes when scheduling. This demo will show an implementation of CIRCE deployed on a testbed of tens of nodes, from both an edge computing testbed and a geographically distributed cloud, with real-time evaluation of the task processing performance of different scheduling algorithms.
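The scheduling idea described above can be sketched as follows. This is a minimal, purely illustrative model (the function, data structures, and numbers are invented, not CIRCE's actual API): each task, visited in topological order, is placed on the node minimizing its profiled compute time plus the profiled transfer latency from its parent's node.

```python
# Hypothetical sketch of network-aware DAG placement in the spirit of CIRCE.
def schedule_dag(tasks, parent, compute_ms, link_ms, nodes):
    """tasks: task ids in topological order; parent: task -> parent or None.
    compute_ms[node][task]: profiled execution time of the task on that node.
    link_ms[a][b]: profiled network latency from node a to node b."""
    placement = {}
    for t in tasks:
        p = parent[t]
        def cost(n):
            transfer = 0.0 if p is None else link_ms[placement[p]][n]
            return transfer + compute_ms[n][t]
        placement[t] = min(nodes, key=cost)
    return placement

# Two-task pipeline: the first stage fits the edge node, but shipping its
# output to the cloud is still cheaper than running the heavy stage at the edge.
compute_ms = {"edge": {"detect": 1, "classify": 5},
              "cloud": {"detect": 3, "classify": 1}}
link_ms = {"edge": {"edge": 0, "cloud": 2}, "cloud": {"edge": 2, "cloud": 0}}
placement = schedule_dag(["detect", "classify"],
                         {"detect": None, "classify": "detect"},
                         compute_ms, link_ms, ["edge", "cloud"])
```

The point of the run-time network profiler is visible in the example: a purely compute-based scheduler would ignore `link_ms` and could pin the heavy stage to the edge node.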
Citations: 10
Efficient service handoff across edge servers via docker container migration
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134460
Lele Ma, Shanhe Yi, Qun A. Li
Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without depending on a distributed file system. We implement a prototype system and conduct experiments using real-world product applications. Evaluation results reveal that compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total service handoff time by 80% (56%) at a network bandwidth of 5 Mbps (20 Mbps).
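Why layered storage cuts the migration cost can be sketched in a few lines. This is an illustrative model, not the paper's implementation: Docker stacks read-only image layers beneath one thin writable layer, so a handoff only needs to ship image layers the destination host does not already cache, plus the writable layer holding the container's runtime file changes.

```python
# Illustrative model of a layered-storage migration payload (invented names).
def migration_payload(image_layers, target_cache, writable_bytes, layer_bytes):
    """image_layers: ordered read-only layer ids of the container image.
    target_cache: layer ids already present on the destination edge server.
    writable_bytes: size of the container's private writable layer.
    layer_bytes: layer id -> size. Returns (layer ids to ship, total bytes)."""
    missing = [l for l in image_layers if l not in target_cache]
    total = sum(layer_bytes[l] for l in missing) + writable_bytes
    return missing, total

sizes = {"base": 200_000_000, "runtime": 80_000_000, "app": 30_000_000}
# Destination already caches the public base layers: only the app layer and
# the small writable layer cross the WAN.
to_ship, payload = migration_payload(["base", "runtime", "app"],
                                     {"base", "runtime"}, 5_000_000, sizes)
```

When the full image is cached at the target, the payload collapses to just the writable layer, which is the case the layered method is designed to exploit.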
Citations: 148
You can teach elephants to dance: agile VM handoff for edge computing
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134453
Kiryong Ha, Yoshihisa Abe, Thomas Eiszler, Zhuo Chen, Wenlu Hu, Brandon Amos, Rohit Upadhyaya, P. Pillai, M. Satyanarayanan
VM handoff enables rapid and transparent placement changes to executing code in edge computing use cases where the safety and management attributes of VM encapsulation are important. This versatile primitive offers the functionality of classic live migration but is highly optimized for the edge. Over WAN bandwidths ranging from 5 to 25 Mbps, VM handoff migrates a running 8 GB VM in about a minute, with a downtime of a few tens of seconds. By dynamically adapting to varying network bandwidth and processing load, VM handoff is more than an order of magnitude faster than live migration at those bandwidths.
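The dynamic adaptation mentioned above can be illustrated with a back-of-the-envelope model (the function and the compression settings below are invented for illustration, not taken from the paper): given the currently measured bandwidth, pick the compression setting whose compress time plus transfer time of the compressed dirty state is smallest.

```python
# Toy model of bandwidth-adaptive compression selection during a VM handoff.
def pick_compression(dirty_bytes, bandwidth_Bps, settings):
    """settings: list of (output_ratio, compress_Bps) options.
    Returns the index of the setting with the smallest total handoff time."""
    def handoff_s(opt):
        ratio, speed = opt
        # time to compress the dirty state + time to send the compressed bytes
        return dirty_bytes / speed + dirty_bytes * ratio / bandwidth_Bps
    return min(range(len(settings)), key=lambda i: handoff_s(settings[i]))

SETTINGS = [(0.5, 100e6),   # fast, light compression
            (0.3, 25e6)]    # slow, aggressive compression
```

At a slow WAN link (e.g. 5 Mbps, roughly 625 KB/s) the model favors aggressive compression because transfer dominates; at a fast link it favors light compression because CPU time dominates, which mirrors the adaptation the paper describes.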
Citations: 87
An energy-efficient offloading framework with predictable temporal correctness 具有可预测时间正确性的节能卸载框架
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134448
Zheng Dong, Yuchuan Liu, Husheng Zhou, Xusheng Xiao, Y. Gu, Lingming Zhang, Cong Liu
As battery-powered embedded devices have limited computational capacity, computation offloading becomes a promising solution that selectively migrates computations to powerful remote servers. The driving problem that motivates this work is to leverage remote resources to facilitate the development of mobile augmented reality (AR) systems. Due to the (soft) timing predictability requirements of many AR-based computations (e.g., object recognition tasks require bounded response times), it is challenging to develop an offloading framework that jointly optimizes the two (somewhat conflicting) goals of achieving timing predictability and energy efficiency. This paper presents a comprehensive offloading and resource management framework for embedded systems, which aims to ensure predictable response time performance while minimizing energy consumption. We develop two offloading algorithms within the framework, which decide the task components that shall be offloaded so that both goals can be achieved simultaneously. We have fully implemented our framework on an Android smartphone platform. An in-depth evaluation using representative Android applications and benchmarks demonstrates that our proposed offloading framework dominates existing approaches in terms of timing predictability (e.g., ours can support workloads requiring up to 100% more CPU utilization), while effectively reducing energy consumption.
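A toy version of the joint decision such a framework faces can be sketched as follows (purely illustrative, not the paper's algorithms): prefer execution options that meet the response-time deadline, and among those pick the one costing the device the least energy; if nothing is feasible, fall back to the fastest option.

```python
# Illustrative deadline-then-energy offloading decision (invented interface).
def choose_execution(options, deadline_ms):
    """options: name -> (latency_ms, device_energy_mJ).
    Returns the name of the chosen execution option."""
    feasible = {n: le for n, le in options.items() if le[0] <= deadline_ms}
    if not feasible:
        # No option meets the deadline: minimize the deadline miss instead.
        return min(options, key=lambda n: options[n][0])
    # Among deadline-feasible options, minimize device energy.
    return min(feasible, key=lambda n: feasible[n][1])
```

The interplay of the two goals shows up directly: tightening the deadline can flip the decision away from the most energy-efficient option.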
Citations: 18
Cloudpath: a multi-tier cloud computing framework
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134464
S. H. Mortazavi, Mohammad Salehe, C. S. Gomes, Caleb Phillips, E. D. Lara
Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud datacenter. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of lightweight stateless event handlers, and a distributed eventually consistent storage system that replicates application data on demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.
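The request-routing consequence of on-demand replication can be sketched in a few lines (an assumed model, with invented names, not CloudPath's API): a request descends the client's path of datacenters and is served at the first, lowest-latency tier that already replicates the data its handler needs, otherwise it falls through to the wide-area cloud.

```python
# Illustrative tier selection along a path of datacenters.
def serving_tier(path, replicated_at):
    """path: tiers ordered from the client outward, e.g. edge -> metro -> cloud.
    replicated_at: tiers currently holding the application's data."""
    for tier in path:
        if tier in replicated_at:
            return tier
    return path[-1]  # the cloud tier keeps the authoritative copy
```

Migrating data down the path (adding a tier to `replicated_at`) is exactly what shortens the access latency in this model.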
Citations: 99
Proceedings of the Second ACM/IEEE Symposium on Edge Computing
Pub Date : 2017-10-12 DOI: 10.1145/3132211
Junshan Zhang, M. Chiang, B. Maggs
SEC is a premier symposium with a highly selective single-track technical program, dedicated to addressing the challenges in edge computing. SEC is orchestrated to provide a unique platform for researchers and practitioners to exchange ideas and demonstrate the most recent advances in research and development on edge computing.
Citations: 1
An edge-facilitated message broker for scalable device discovery: poster
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3132456
Zhe Huang, Bharath Balasubramanian, Azzam Alsudais, Kaustubh R. Joshi
Searching for a particular device in an ocean of devices is a perfect illustration of the idiom 'searching for a needle in a haystack'. Yet the future IoT and edge computing platforms are facing an even more challenging problem, because their mission-critical operations (e.g., application orchestration, device and application telemetry, inventory management) depend on their capability of identifying nodes of interest from potentially millions of service providers across the globe according to highly dynamic attributes such as geo-location information, bandwidth availability, real-time workload and so on. For example, a vehicular-based crowd sensing application that collects air quality data near an exit of a highway needs to locate cars in close proximity to the exit among millions of cars running on the road. In a business model where an enterprise offers a framework for clients to avail such edge/IoT services, we investigate the following problem: "among millions of IoT/Edge nodes, how do we locate and communicate with only those nodes that satisfy certain attributes, especially when some of these attributes change rapidly?" In this paper, we address this problem through the design of a scalable message broker based on the following novel intuition: device discovery should be a joint effort between a centrally managed enterprise-level system (high availability, low accuracy) and the fully decentralized edge (high accuracy, unpredictable availability). To elaborate, the enterprise can centrally maintain and manage the attributes of all the IoT devices. However, since millions of devices cannot constantly update their attribute information, central management has the issue of attribute staleness. Clearly the devices themselves have the most up-to-date information. However, it is not feasible for every request to be routed to millions of devices connected by unpredictable networks, where only some of them may possess the correct attributes.
In this paper, we propose a message broker in which requests for relatively static device attributes are handled by the centrally managed system, whereas requests for dynamic attributes are handled by peer-to-peer networks of the edge devices containing those attributes. This combination provides a scalable solution wherein, based on client needs, we can obtain attribute values without compromising on freshness or performance. Several previous works aim to tackle the device searching problem. Name-based networking solutions such as Intentional Naming System (INS) [1], Auspice [5], and global name service [3] propose to implement a centrally managed name resolution service. Devices periodically update their status information and descriptions in a push-based approach. While centrally maintaining complete knowledge of every device in the network makes searching much easier, the excessive workload from millions of devices updating their status in a highly dynamic environment renders the scheme unscalable.
Pull-based solutions such as Geocast [2], on the other hand, eliminate the status-update workload entirely by forwarding device-search queries to the devices themselves and relying on each device to identify itself when its status and attributes match the query. However, pull-based solutions require an attribute-aware message routing scheme, such as a distributed hash table [4], that knows exactly how to reach the devices that may match a query. Such designs also suffer from longer query-response latency due to query forwarding, and from increased security risk, since they trust devices to report their identities and attributes honestly. A better solution should combine the strengths of the push and pull design principles. Based on the target edge/IoT environments and applications, we identify the following design goals for a message broker that can support large-scale, highly dynamic network environments. • Searchability: the broker must be able to identify and reach devices by arbitrary attributes, service descriptions, and queries. • Verifiability: the broker must be able to verify attributes and descriptions against authoritative information sources. • Scalability: the broker must support large-scale deployments at minimal infrastructure cost. • Timeliness: the broker must identify devices according to the latest attributes and device status at the time of a user request. • Inclusiveness: the broker must return a device list containing all live devices matching a received query. • Robustness: the broker must tolerate service failures and high network churn.
In our design, searchability makes the broker expressive: applications and devices can declare their own attribute keys and values, and device queries may carry customized device-search logic, giving applications great flexibility in defining how to identify the corresponding nodes of interest. Among the declared attribute keys and values, applications are also free to declare the authoritative information source of each attribute; only the authoritative source may access and modify the corresponding attribute fields of a device. This verifiability effectively prevents malicious attacks such as identity spoofing and eavesdropping. For scalability, the broker offloads the device-status upload workload to the end devices: selected end devices receive status updates from other devices, and by maintaining a list of these representative devices, the broker service can fetch the latest device status whenever needed. By limiting the scope of message exchange, the broker offsets the extra workload of frequently updated dynamic attributes. This mechanism lets the broker offer attribute-update channels of multiple granularities to applications with different timeliness requirements: static attributes such as device affiliation can be updated through a global channel, while dynamic attributes such as geo-location are exchanged within a smaller scope. In the extreme case of collecting attributes that change in real time, devices no longer exchange or update their attribute and status information with other devices; instead, communication channels are established between the nodes of interest, and status and attributes are pulled directly on demand. Inclusiveness and robustness are achieved by regulating strong semantics for the message exchanges between devices. Inclusiveness guarantees that an application can reach every live node of interest through the broker, giving it a complete view of globally available services and resources. Each component is designed and implemented as a distributed system that tolerates a certain degree of failure and, more importantly, is self-sustaining, so components do not depend on one another to function correctly.
Figure 1 shows the architecture of our proposed message broker system, which we call EF-broker. EF-broker provides three main services: (1) device discovery and inventory management (DDIM); (2) dynamic group management (DGM); and (3) a communication channel orchestration engine (CCOE). DDIM is implemented as a centrally managed, geo-distributed bookkeeping service that maintains a global view of all live devices. It serves as the rendezvous point for newly arriving devices and also tracks device liveness by requiring devices to update their attributes and status at low frequency as heartbeats. This global attribute-update channel propagates status-update messages to the geo-distributed DDIM servers in an eventually consistent manner, so DDIM can efficiently answer device queries that depend on relatively static attribute values. To track dynamic device attributes whose values change frequently, a peer-to-peer cluster of devices, which we call a dynamic group (DG), is created for each dynamic attribute key. Devices in the same DG exchange attribute and status information at higher frequency using a gossip protocol, so each device keeps fresh attributes and status for the other members of its group. Representative devices are selected as the entry points of each DG; the representative nodes distribute and maintain the DG's membership list in a strongly consistent manner to achieve inclusiveness. The centrally managed, geo-distributed DGM service manages the life cycle of the many DGs: it creates, terminates, splits, and merges DGs, and maintains and repairs their entry points. By forwarding device queries to the entry points of the appropriate DGs, the DGM service provides the finer-grained attribute-update channels. Finally, EF-broker can also create pub-sub channels on demand between the devices in a DG, so that applications can pull real-time attributes and status directly from the nodes of interest; the CCOE manages the life cycle of these pub-sub channels.
Citations: 1
Multi-stage stochastic programming for service placement in edge computing systems: poster
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3132461
H. Badri, Tayebeh Bahreini, Daniel Grosu, Kai Yang
Efficient service placement of mobile applications on the edge servers is one of the main challenges in Mobile Edge Computing (MEC). The service placement problem in MEC has to consider several issues that were not present in the data-center settings. After the initial service placement, mobile users may move to different locations which may increase the execution time or the cost of running the applications. In addition to this, the resource availability of servers may change over time. Therefore, an efficient service placement algorithm must be adaptive to this dynamic setting.
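A toy two-stage instance of this stochastic placement idea can make the structure concrete (the function, costs, and scenarios are invented for illustration, not the poster's formulation): choose the initial server minimizing its placement cost plus the expected second-stage access cost over user-mobility scenarios.

```python
# Illustrative two-stage stochastic placement over mobility scenarios.
def initial_placement(servers, place_cost, scenarios):
    """place_cost: server -> first-stage placement cost.
    scenarios: list of (probability, {server: access_cost}) pairs describing
    where the user might move and what each server then costs to reach."""
    def expected_cost(s):
        return place_cost[s] + sum(p * access[s] for p, access in scenarios)
    return min(servers, key=expected_cost)
```

A full multi-stage formulation would additionally allow recourse (migrating the service after observing each stage), but even this two-stage sketch shows why the cheapest initial server is not always the best choice under mobility uncertainty.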
Citations: 6
ePrivateEye: to the edge and beyond!
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134457
Christopher Streiffer, Animesh Srivastava, Victor Orlikowski, Yesenia Velasco, Vincentius Martin, Nisarg Raval, Ashwin Machanavajjhala, Landon P. Cox
Edge computing offers resource-constrained devices low-latency access to high-performance computing infrastructure. In this paper, we present ePrivateEye, an implementation of PrivateEye that offloads computationally expensive computer-vision processing to an edge server. The original PrivateEye locally processed video frames on a mobile device and delivered approximately 20 fps, whereas ePrivateEye transfers frames to a remote server for processing. We present experimental results that utilize our campus Software-Defined Networking infrastructure to characterize how network-path latency, packet loss, and geographic distance impact offloading to the edge in ePrivateEye. We show that offloading video-frame analysis to an edge server at a metro-scale distance allows ePrivateEye to analyze more frames than PrivateEye's local processing over the same period, achieving real-time performance of 30 fps with perfect precision and negligible impact on energy efficiency.
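Why metro-scale offload can hit real-time rates is worth a rough model (the functions and numbers are invented for illustration): when frames are pipelined to the server, sustained throughput is bounded by the slowest stage, while the full round-trip latency only delays each individual result.

```python
# Rough pipelined-offload throughput vs. per-frame delay model.
def pipelined_fps(upload_ms, server_ms, download_ms):
    """Sustained frame rate: limited by the slowest pipeline stage."""
    return 1000.0 / max(upload_ms, server_ms, download_ms)

def per_frame_delay_ms(upload_ms, server_ms, download_ms):
    """Latency seen by any single frame: the stages add up."""
    return upload_ms + server_ms + download_ms
```

With, say, 30 ms upload, 25 ms server processing, and 5 ms download, each frame's result arrives 60 ms later, yet the pipeline sustains more than 30 fps, which is the distinction the evaluation above exploits.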
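The abstract above contrasts on-device throughput (~20 fps) with offloaded throughput (30 fps). A back-of-the-envelope model — with hypothetical numbers, not measurements from the paper — captures why pipelined offloading can sustain a higher frame rate: throughput is bounded by the slowest pipeline stage, while path latency only adds per-frame delay.

```python
# Back-of-the-envelope model of pipelined frame offloading. All numbers
# below are hypothetical illustrations, not measurements from ePrivateEye.

def offload_fps(frame_bytes, bandwidth_bps, server_proc_s):
    """Sustained throughput when frames are streamed continuously:
    limited by the slower of network transfer and server processing."""
    transfer_s = frame_bytes * 8 / bandwidth_bps  # time to ship one frame
    return 1.0 / max(transfer_s, server_proc_s)   # slowest stage wins

def per_frame_delay_s(frame_bytes, bandwidth_bps, server_proc_s, rtt_s):
    """End-to-end latency for one frame's result: uplink propagation,
    transfer, server processing, and the return trip."""
    return rtt_s + frame_bytes * 8 / bandwidth_bps + server_proc_s

# Example: 50 KB frames over 20 Mbps, 25 ms server processing.
print(offload_fps(50_000, 20e6, 0.025))  # ~40 fps, above the 30 fps target
```

Under these assumed numbers, offloading clears 30 fps even though each individual frame's result arrives tens of milliseconds late — consistent with the paper's point that metro-scale distance is workable for throughput-oriented analysis.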
{"title":"ePrivateeye: to the edge and beyond!","authors":"Christopher Streiffer, Animesh Srivastava, Victor Orlikowski, Yesenia Velasco, Vincentius Martin, Nisarg Raval, Ashwin Machanavajjhala, Landon P. Cox","doi":"10.1145/3132211.3134457","DOIUrl":"https://doi.org/10.1145/3132211.3134457","url":null,"abstract":"Edge computing offers resource-constrained devices low-latency access to high-performance computing infrastructure. In this paper, we present ePrivateEye, an implementation of PrivateEye that offloads computationally expensive computer-vision processing to an edge server. The original PrivateEye locally processed video frames on a mobile device and delivered approximately 20 fps, whereas ePrivateEye transfers frames to a remote server for processing. We present experimental results that utilize our campus Software-Defined Networking infrastructure to characterize how network-path latency, packet loss, and geographic distance impact offloading to the edge in ePrivateEye. We show that offloading video-frame analysis to an edge server at a metro-scale distance allows ePrivateEye to analyze more frames than PrivateEye's local processing over the same period to achieve realtime performance of 30 fps, with perfect precision and negligible impact on energy efficiency.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121167451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services
Pub Date : 2017-10-12 DOI: 10.1145/3132211.3134447
Pengzhan Hao, Yongshu Bai, Xin Zhang, Yifan Zhang
Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that, in this common scenario, current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents. Specifically, even with an incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time the document file is synchronized. We analyze the causes of the problem in depth and propose EdgeCourier, a system that addresses it. We also propose the concept of the edge-hosted personal service (EPS), which has many benefits, such as making EdgeCourier easy to deploy in practice. We have prototyped the EdgeCourier system, deployed it in the form of an EPS in a lab environment, and performed extensive experiments for evaluation. The results suggest that our prototype can effectively reduce document synchronization bandwidth with negligible overhead.
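To make the abstract's claim concrete: a minimal sketch of naive fixed-offset block-level delta sync (illustrative only — not EdgeCourier's actual mechanism, which synchronizes at sub-document granularity via the edge host) shows both how incremental sync avoids re-uploading unchanged data and why it breaks down for office documents.

```python
import hashlib

# Naive fixed-offset delta sync: hash fixed-size blocks and re-upload only
# blocks whose hash changed. Office suite documents are zip containers, so
# a small edit typically recompresses and shifts the container's bytes,
# invalidating most fixed-offset blocks -- which is why plain incremental
# sync can degenerate to whole-file transfer for them.

BLOCK = 4096

def block_hashes(data: bytes, block: int = BLOCK):
    return [hashlib.sha256(data[i:i + block]).digest()
            for i in range(0, len(data), block)]

def changed_blocks(old: bytes, new: bytes, block: int = BLOCK):
    """Indices of blocks in `new` that must be uploaded."""
    old_h = block_hashes(old, block)
    return [i for i, h in enumerate(block_hashes(new, block))
            if i >= len(old_h) or h != old_h[i]]

old = b"a" * 8192
new = b"a" * 4096 + b"b" * 4096       # in-place edit: only block 1 changes
print(changed_blocks(old, new))        # -> [1]
```

A production delta scheme (e.g., rsync-style rolling checksums or content-defined chunking) tolerates insertions that shift offsets; EdgeCourier instead sidesteps the container problem by operating on the document's contents at the edge before syncing to the cloud.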
{"title":"Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services","authors":"Pengzhan Hao, Yongshu Bai, Xin Zhang, Yifan Zhang","doi":"10.1145/3132211.3134447","DOIUrl":"https://doi.org/10.1145/3132211.3134447","url":null,"abstract":"Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127229361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21