
Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking: Latest Publications

Optimized Assignment of Computational Tasks in Vehicular Micro Clouds
Ghaith Hattab, Seyhan Uçar, Takamasa Higuchi, O. Altintas, F. Dressler, D. Cabric
Continuing advances in vehicle technology have not only turned vehicles into mobile devices with Internet connectivity, but have also pushed them to become powerful computing resources. To this end, a cluster of vehicles can form a vehicular micro cloud, creating a virtual edge server and providing the computational resources needed for edge-based services. In this paper, we study the assignment of computational tasks among micro cloud vehicles with different computing resources. In particular, we formulate a bottleneck assignment problem, where the objective is to minimize the completion time of tasks assigned to available vehicles in the micro cloud. A two-stage algorithm, with polynomial-time complexity, is proposed to solve the problem. We use Monte Carlo simulations to validate the effectiveness of the proposed algorithm in two micro cloud scenarios: a parking structure and an intersection in a Manhattan grid. It is shown that the algorithm significantly outperforms random assignment in completion time. For example, compared to the proposed algorithm, the completion time is 3.6x longer with random assignment when the number of cars is large, and it is 2.1x longer when the tasks have more varying requirements.
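The bottleneck objective (minimize the worst completion time over all task-to-vehicle assignments) can be illustrated with a small sketch. This is not the authors' two-stage algorithm; it is a standard threshold search over a square cost matrix combined with bipartite matching, with all task/vehicle numbers invented for illustration:

```python
def feasible(cost, limit):
    """Is there a perfect task-to-vehicle matching using only completion
    times <= limit? (Kuhn's augmenting-path matching on a square matrix.)"""
    n = len(cost)
    match = [-1] * n  # vehicle -> assigned task

    def augment(task, seen):
        for v in range(n):
            if cost[task][v] <= limit and v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = task
                    return True
        return False

    return all(augment(t, set()) for t in range(n))

def bottleneck_assignment(cost):
    """Smallest makespan M such that every task can be assigned to a
    distinct vehicle with completion time <= M (binary search over costs)."""
    values = sorted({c for row in cost for c in row})
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(cost, values[mid]):
            hi = mid
        else:
            lo = mid + 1
    return values[lo]
```

For example, with `cost = [[4, 2], [3, 1]]` (rows are tasks, columns are vehicles), assigning task 0 to vehicle 1 and task 1 to vehicle 0 yields a bottleneck of 3, which is optimal.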
DOI: 10.1145/3301418.3313937 | Published: 2019-03-25
Citations: 19
Transparent AR Processing Acceleration at the Edge
M. Trinelli, Massimo Gallo, M. Rifai, Fabio Pianese
Mobile devices are increasingly capable of supporting advanced functionalities but still face fundamental resource limitations. While the development of custom accelerators for compute-intensive functions is progressing, precious battery life and quality vs. latency trade-offs are limiting the potential of applications that rely on processing real-time, compute-intensive functions, such as Augmented Reality. Transparent network support for on-the-fly media processing at the edge can significantly extend the capabilities of mobile devices without the need for API changes. In this paper we introduce NEAR, a framework for transparent live video processing and augmentation at the network edge, along with its architecture and a preliminary performance evaluation in an object detection use case.
DOI: 10.1145/3301418.3313942 | Published: 2019-03-25
Citations: 9
Enabling Wireless Network Support for Gain Scheduled Control
Sebastian Gallenmüller, René Glebke, Stephan M. Günther, Eric Hauser, Maurice Leclaire, S. Reif, Jan Rüth, Andreas Schmidt, G. Carle, T. Herfet, Wolfgang Schröder-Preikschat, Klaus Wehrle
To enable cooperation of cyber-physical systems in latency-critical scenarios, control algorithms are placed in edge systems communicating with sensors and actuators via wireless channels. The shift from wired towards wireless communication is accompanied by an inherent lack of predictability due to interference and mobility. The state of the art in distributed controller design is proactive in nature, modeling and predicting (and potentially oversimplifying) channel properties stochastically or pessimistically, i.e., with worst-case considerations. In contrast, we present a system based on a real-time transport protocol that is aware of application-level constraints and applies run-time measurements of channel properties. Our run-time system uses this information to select appropriate controller instances, i.e., gain scheduling, that can handle the current conditions. We evaluate our system empirically in a wireless testbed employing a shielded environment to ensure reproducible channel conditions. A series of measurements demonstrates the predictability of latency and potential limits for wireless networked control.
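The core of gain scheduling is a lookup from measured channel conditions to a pre-designed controller instance. A minimal sketch, in which all latency bounds and gain values are hypothetical (the paper does not specify them):

```python
# Hypothetical gain schedule: controller instances indexed by the worst-case
# channel latency (seconds) measured at run time by the transport layer.
GAIN_SCHEDULE = [
    (0.005, {"kp": 1.2, "kd": 0.4}),         # <= 5 ms: aggressive gains
    (0.020, {"kp": 0.8, "kd": 0.3}),         # <= 20 ms: moderate gains
    (float("inf"), {"kp": 0.4, "kd": 0.1}),  # degraded channel: conservative
]

def select_gains(measured_latency_s):
    """Pick the controller instance matching the measured channel latency."""
    for bound, gains in GAIN_SCHEDULE:
        if measured_latency_s <= bound:
            return gains
```

At run time, the transport layer's latency measurement would be fed into `select_gains` on every scheduling interval, switching controller instances as the channel degrades or recovers.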
DOI: 10.1145/3301418.3313943 | Published: 2019-03-25
Citations: 4
Checkpointing and Migration of IoT Edge Functions
Pekka Karhula, J. Janak, H. Schulzrinne
The serverless and functions as a service (FaaS) paradigms are currently trending among cloud providers and are now increasingly being applied to the network edge and to Internet of Things (IoT) devices. The benefits include reduced communication latency, less network traffic, and increased privacy for data processing. However, challenges remain: IoT devices have limited resources for running multiple containerized functions simultaneously, and FaaS does not typically support long-running functions. Our implementation utilizes Docker and CRIU for checkpointing and suspending long-running blocking functions. The results show that checkpointing is slightly slower than a regular Docker pause, but it saves memory and allows more long-running functions to be run on an IoT device. Furthermore, the resulting checkpoint files are small, so they are suitable for live migration and for backing up stateful functions, thereby improving the availability and reliability of the system.
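Docker's CRIU-backed checkpointing is exposed through the experimental `docker checkpoint create` and `docker start --checkpoint` commands. A small sketch that builds these commands for suspending and resuming a function container (the container and checkpoint names are placeholders, and the exact CLI requires Docker's experimental mode with CRIU installed):

```python
def checkpoint_cmd(container, ckpt_name):
    # Freeze the container's in-memory state to disk via CRIU and stop it,
    # freeing memory for other functions on the IoT device.
    return ["docker", "checkpoint", "create", container, ckpt_name]

def restore_cmd(container, ckpt_name):
    # Resume the long-running function exactly where it was suspended.
    return ["docker", "start", "--checkpoint", ckpt_name, container]
```

These command lists could then be driven from an edge runtime via `subprocess.run(...)`; the small checkpoint files are also what makes copying the state to another host for live migration practical.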
DOI: 10.1145/3301418.3313947 | Published: 2019-03-25
Citations: 32
Edge Chaining Framework for Black Ice Road Fingerprinting
Vittorio Cozzolino, A. Ding, J. Ott
Detecting and reacting efficiently to road condition hazards are challenging given practical restrictions such as limited data availability and lack of infrastructure support. In this paper, we present an edge-cloud chaining solution that bridges the cloud and road infrastructures to enhance road safety. We exploit the roadside infrastructure (e.g., smart lampposts) to form a processing chain at the edge nodes and transmit the essential context to approaching vehicles, providing what we refer to as road fingerprinting. We approach the problem from two angles: first we focus on semantically defining how an execution pipeline spanning edge and cloud is composed, then we design, implement, and evaluate a working prototype based on our assumptions. In addition, we present experimental insights and outline open challenges for next steps.
DOI: 10.1145/3301418.3313944 | Published: 2019-03-25
Citations: 6
Energy-Aware Speculative Execution in Vehicular Edge Computing Systems
Tayebeh Bahreini, Marco Brocanelli, Daniel Grosu
We address the problem of energy-aware optimization of speculative execution in vehicular edge computing systems, where multiple copies of a workload are executed on a number of different nodes to ensure high reliability and performance. The objective is to minimize the energy consumption over multiple time periods while minimizing the latency for each of the periods. We prove that the problem is NP-hard and propose a greedy algorithm to solve it in polynomial time. We evaluate the performance of the proposed algorithm by conducting an extensive experimental analysis. The experimental results indicate that the proposed algorithm obtains near optimal solutions within a reasonable amount of time.
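The trade-off at the heart of speculative execution — adding replicas lowers latency but costs energy — can be caricatured with a toy greedy rule. This is not the authors' algorithm; it simply adds replicas in order of energy cost until the fastest chosen replica meets the period's latency target, with all numbers invented:

```python
def greedy_replicate(nodes, deadline):
    """nodes: list of (exec_time, energy) pairs for candidate edge nodes.
    Greedily pick cheapest-energy replicas until the fastest chosen replica
    meets the deadline; return (chosen replicas, total energy spent)."""
    chosen, total_energy = [], 0.0
    for exec_time, energy in sorted(nodes, key=lambda n: n[1]):
        chosen.append((exec_time, energy))
        total_energy += energy
        if min(t for t, _ in chosen) <= deadline:
            return chosen, total_energy
    return chosen, total_energy  # deadline unreachable: best effort
```

With candidates `[(10, 1.0), (4, 2.0), (3, 5.0)]` and a deadline of 5, the sketch picks the two cheapest nodes (total energy 3.0) and stops once the 4-time-unit replica satisfies the deadline, avoiding the expensive fastest node.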
DOI: 10.1145/3301418.3313940 | Published: 2019-03-25
Citations: 4
Snape: The Dark Art of Handling Heterogeneous Enclaves
Zahra Tarkhani, Anil Madhavapeddy, R. Mortier
Code executing on the edge needs to run on hardware platforms that feature different memory architectures, virtualization extensions, and a range of security features. Forcing application code to conform to a monolithic API such as POSIX, or an ABI such as Linux, ties developers into large, complex platforms that make it difficult to use such hardware-specific features effectively, while bringing their own baggage and attendant security issues. As edge computing proliferates, handling increasingly sensitive and intimate data in our everyday lives, it becomes important for developers to be able to use all the hardware resources of their particular platform, correctly and efficiently. To this end, we propose Snape, an API and composable platform for matching applications' needs to the available hardware features in a heterogeneous environment. Unlike existing solutions, Snape provides applications with a flexible trust model and replaces untrusted host OS services with corresponding hardware-assisted secured services. We report experience with our proof-of-concept implementation that enables Solo5 unikernels on Raspberry Pi 3 boards to make effective use of ARM TrustZone security technology.
DOI: 10.1145/3301418.3313945 | Published: 2019-03-25
Citations: 5
A Reality Check on Inference at Mobile Networks Edge
Alejandro Cartas, M. Kocour, Aravindh Raman, Ilias Leontiadis, J. Luque, Nishanth R. Sastry, José Núñez-Martínez, Diego Perino, C. Segura
Edge computing is considered a key enabler for deploying Artificial Intelligence platforms that provide real-time applications such as AR/VR or cognitive assistance. Previous works show that computing capabilities deployed very close to the user can actually reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the software architectures considered, within the framework of a couple of popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck for end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides latency reduction but, overall, it is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
DOI: 10.1145/3301418.3313946 | Published: 2019-03-25
Citations: 28
The Web Browser as Distributed Application Server: Towards Decentralized Web Applications in the Edge
Kristof Jannes, B. Lagaisse, W. Joosen
Web applications are evolving to a decentralized, client-centric architecture in which browsers need to be able to put the user back in control of their personal data, need to be able to operate in disconnected settings, and need to offload the web server as much as possible. This paper presents a set of key application scenarios and trends in different business domains that require a more client-centric and data-centric web middleware for decentralized, peer-to-peer web applications in the edge. We define a set of key requirements for data operations in such middleware and motivate them with the application cases. This paper further discusses the current state and limitations of the browser as a platform for peer-to-peer communication and complex decentralized applications with shared data. We conclude with a performance assessment of our first prototype middleware for client-centric and data-centric peer-to-peer web applications.
DOI: 10.1145/3301418.3313938 | Published: 2019-03-25
Citations: 16
ExEC: Elastic Extensible Edge Cloud
Aleksandr Zavodovski, Nitinder Mohan, S. Bayhan, Walter Wong, J. Kangasharju
Edge computing (EC) extends the centralized cloud computing paradigm by bringing computation into close proximity to the end-users, to the edge of the network, and is a key enabler for applications requiring low latency such as augmented reality or content delivery. To make EC pervasive, the following challenges must be tackled: how to satisfy the growing demand for edge computing facilities, how to discover the nearby edge servers, and how to securely access them? In this paper, we present ExEC, an open framework where edge providers can offer their capacity and be discovered by application providers and end-users. ExEC aims at the unification of interaction between edge and cloud providers so that cloud providers can utilize services of third-party edge providers, and any willing entity can easily become an edge provider. In ExEC, the unfolding of initially cloud-deployed application towards edge happens without administrative intervention, since ExEC discovers available edge providers on the fly and monitors incoming end-user traffic, determining the near-optimal placement of edge services. ExEC is a set of loosely coupled components and common practices, allowing for custom implementations needed to embrace the diverse needs of specific EC scenarios. ExEC leverages only existing protocols and requires no modifications to the deployed infrastructure. Using real-world topology data and experiments on cloud platforms, we demonstrate the feasibility of ExEC and present results on its expected performance.
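ExEC's placement step (monitor incoming end-user traffic, then choose a near-optimal edge provider) can be sketched as a traffic-weighted latency minimization. All provider names, regions, and latencies below are invented for illustration:

```python
def place_service(providers, observed_requests):
    """providers: {provider_name: {client_region: latency_ms}};
    observed_requests: list of client regions seen in recent traffic.
    Return the provider with the lowest mean latency over that traffic."""
    def mean_latency(latency_by_region):
        return (sum(latency_by_region[region] for region in observed_requests)
                / len(observed_requests))
    return min(providers, key=lambda name: mean_latency(providers[name]))
```

For instance, if most observed requests come from one region, the sketch favors the third-party edge provider closest to that region even if it is worse for rarer regions; re-running it as the traffic mix shifts gives the on-the-fly re-placement the abstract describes.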
{"title":"ExEC: Elastic Extensible Edge Cloud","authors":"Aleksandr Zavodovski, Nitinder Mohan, S. Bayhan, Walter Wong, J. Kangasharju","doi":"10.1145/3301418.3313941","DOIUrl":"https://doi.org/10.1145/3301418.3313941","url":null,"abstract":"Edge computing (EC) extends the centralized cloud computing paradigm by bringing computation into close proximity to the end-users, to the edge of the network, and is a key enabler for applications requiring low latency such as augmented reality or content delivery. To make EC pervasive, the following challenges must be tackled: how to satisfy the growing demand for edge computing facilities, how to discover the nearby edge servers, and how to securely access them? In this paper, we present ExEC, an open framework where edge providers can offer their capacity and be discovered by application providers and end-users. ExEC aims at the unification of interaction between edge and cloud providers so that cloud providers can utilize services of third-party edge providers, and any willing entity can easily become an edge provider. In ExEC, the unfolding of initially cloud-deployed application towards edge happens without administrative intervention, since ExEC discovers available edge providers on the fly and monitors incoming end-user traffic, determining the near-optimal placement of edge services. ExEC is a set of loosely coupled components and common practices, allowing for custom implementations needed to embrace the diverse needs of specific EC scenarios. ExEC leverages only existing protocols and requires no modifications to the deployed infrastructure. 
Using real-world topology data and experiments on cloud platforms, we demonstrate the feasibility of ExEC and present results on its expected performance.","PeriodicalId":131097,"journal":{"name":"Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125990395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10