Ghaith Hattab, Seyhan Uçar, Takamasa Higuchi, O. Altintas, F. Dressler, D. Cabric
Ever-increasing advancements in vehicle technology have not only made vehicles mobile devices with Internet connectivity, but have also turned them into powerful computing resources. To this end, a cluster of vehicles can form a vehicular micro cloud, creating a virtual edge server and providing the computational resources needed for edge-based services. In this paper, we study the assignment of computational tasks among micro cloud vehicles with different computing resources. In particular, we formulate a bottleneck assignment problem, where the objective is to minimize the completion time of tasks assigned to available vehicles in the micro cloud. A two-stage algorithm with polynomial-time complexity is proposed to solve the problem. We use Monte Carlo simulations to validate the effectiveness of the proposed algorithm in two micro cloud scenarios: a parking structure and an intersection in a Manhattan grid. It is shown that the algorithm significantly outperforms random assignment in completion time. For example, compared to the proposed algorithm, the completion time is 3.6x longer with random assignment when the number of cars is large, and 2.1x longer when the tasks have more varying requirements.
Title: Optimized Assignment of Computational Tasks in Vehicular Micro Clouds
DOI: https://doi.org/10.1145/3301418.3313937
Published in: Proceedings of the 2nd International Workshop on Edge Systems, Analytics and Networking, March 25, 2019
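The abstract does not spell out the two-stage algorithm, but the bottleneck (min-max) assignment objective it names can be illustrated with a standard approach: binary-search over candidate completion times and check feasibility with a bipartite matching. This is a minimal sketch of that technique, not the paper's method; all function names and numbers are hypothetical.

```python
def feasible(times, limit):
    """Can every task get a distinct vehicle so that no assigned
    completion time exceeds `limit`? (Kuhn's augmenting-path matching.)"""
    n = len(times)
    owner = [-1] * n  # vehicle j -> task currently assigned to it

    def augment(task, visited):
        for v in range(n):
            if times[task][v] <= limit and v not in visited:
                visited.add(v)
                if owner[v] == -1 or augment(owner[v], visited):
                    owner[v] = task
                    return True
        return False

    return all(augment(t, set()) for t in range(n))


def bottleneck_assignment(times):
    """Minimize the maximum completion time over one-to-one task-to-vehicle
    assignments; times[i][j] = completion time of task i on vehicle j."""
    candidates = sorted({t for row in times for t in row})
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(times, candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]
```

Each feasibility check is O(n^3) and the binary search adds a log factor, so the whole sketch stays polynomial, consistent with the complexity the abstract claims.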
M. Trinelli, Massimo Gallo, M. Rifai, Fabio Pianese
Mobile devices are increasingly capable of supporting advanced functionalities but still face fundamental resource limitations. While the development of custom accelerators for compute-intensive functions is progressing, precious battery life and quality-vs-latency trade-offs limit the potential of applications that rely on processing real-time, computation-intensive functions, such as Augmented Reality. Transparent network support for on-the-fly media processing at the edge can significantly extend the capabilities of mobile devices without the need for API changes. In this paper we introduce NEAR, a framework for transparent live video processing and augmentation at the network edge, along with its architecture and a preliminary performance evaluation in an object detection use case.
Title: Transparent AR Processing Acceleration at the Edge
DOI: https://doi.org/10.1145/3301418.3313942
Sebastian Gallenmüller, René Glebke, Stephan M. Günther, Eric Hauser, Maurice Leclaire, S. Reif, Jan Rüth, Andreas Schmidt, G. Carle, T. Herfet, Wolfgang Schröder-Preikschat, Klaus Wehrle
To enable cooperation of cyber-physical systems in latency-critical scenarios, control algorithms are placed in edge systems that communicate with sensors and actuators via wireless channels. The shift from wired towards wireless communication is accompanied by an inherent lack of predictability due to interference and mobility. The state of the art in distributed controller design is proactive in nature, modeling and predicting (and potentially oversimplifying) channel properties stochastically or pessimistically, i.e., with worst-case considerations. In contrast, we present a system based on a real-time transport protocol that is aware of application-level constraints and applies run-time measurements of channel properties. Our run-time system utilizes this information to select appropriate controller instances, i.e., gain scheduling, that can handle the current conditions. We evaluate our system empirically in a wireless testbed employing a shielded environment to ensure reproducible channel conditions. A series of measurements demonstrates the predictability of latency and the potential limits for wireless networked control.
Title: Enabling Wireless Network Support for Gain Scheduled Control
DOI: https://doi.org/10.1145/3301418.3313943
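The run-time selection of controller instances described above amounts to mapping a measured channel property to a pre-designed controller. A minimal sketch of such a gain schedule follows; the latency bounds and gains are invented for illustration and are not taken from the paper.

```python
# Hypothetical gain schedule: (latency bound in ms, controller gain).
# Tighter channels permit more aggressive control; all numbers are
# illustrative placeholders, not values from the paper.
SCHEDULE = [(10, 1.8), (25, 1.2), (50, 0.7)]


def select_controller(measured_latency_ms, schedule=SCHEDULE):
    """Pick the most aggressive controller instance whose latency bound
    still covers the current run-time measurement."""
    for bound, gain in schedule:
        if measured_latency_ms <= bound:
            return gain
    return None  # channel too slow for any instance: fall back to a safe mode
```

Returning `None` when no instance qualifies mirrors the paper's motivation: the run-time measurements expose conditions no proactive worst-case design would have to anticipate.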
The serverless and functions as a service (FaaS) paradigms are currently trending among cloud providers and are now increasingly being applied to the network edge and to Internet of Things (IoT) devices. The benefits include reduced communication latency, less network traffic, and increased privacy for data processing. However, there are challenges: IoT devices have limited resources for running multiple simultaneous containerized functions, and FaaS does not typically support long-running functions. Our implementation utilizes Docker and CRIU for checkpointing and suspending long-running blocking functions. The results show that checkpointing is slightly slower than a regular Docker pause, but it saves memory and allows more long-running functions to be run on an IoT device. Furthermore, the resulting checkpoint files are small, making them suitable for live migration and for backing up stateful functions, thereby improving the availability and reliability of the system.
Title: Checkpointing and Migration of IoT Edge Functions
Authors: Pekka Karhula, J. Janak, H. Schulzrinne
DOI: https://doi.org/10.1145/3301418.3313947
Detecting and reacting efficiently to road condition hazards is challenging given practical restrictions such as limited data availability and lack of infrastructure support. In this paper, we present an edge-cloud chaining solution that bridges the cloud and road infrastructures to enhance road safety. We exploit the roadside infrastructure (e.g., smart lampposts) to form a processing chain at the edge nodes and transmit the essential context to approaching vehicles, providing what we refer to as road fingerprinting. We approach the problem from two angles: first, we focus on semantically defining how an execution pipeline spanning edge and cloud is composed; then, we design, implement, and evaluate a working prototype based on our assumptions. In addition, we present experimental insights and outline open challenges for next steps.
Title: Edge Chaining Framework for Black Ice Road Fingerprinting
Authors: Vittorio Cozzolino, A. Ding, J. Ott
DOI: https://doi.org/10.1145/3301418.3313944
We address the problem of energy-aware optimization of speculative execution in vehicular edge computing systems, where multiple copies of a workload are executed on a number of different nodes to ensure high reliability and performance. The objective is to minimize the energy consumption over multiple time periods while minimizing the latency for each of the periods. We prove that the problem is NP-hard and propose a greedy algorithm to solve it in polynomial time. We evaluate the performance of the proposed algorithm by conducting an extensive experimental analysis. The experimental results indicate that the proposed algorithm obtains near-optimal solutions within a reasonable amount of time.
Title: Energy-Aware Speculative Execution in Vehicular Edge Computing Systems
Authors: Tayebeh Bahreini, Marco Brocanelli, Daniel Grosu
DOI: https://doi.org/10.1145/3301418.3313940
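The abstract leaves the greedy algorithm unspecified. As one plausible illustration of energy-aware speculative replication, the sketch below models each candidate node as an (energy cost, on-time probability) pair and adds copies cheapest-first until a reliability target is met. The model and all numbers are assumptions for illustration, not the paper's formulation.

```python
def greedy_replication(nodes, target):
    """nodes: (energy_cost, prob_of_on_time_completion) per candidate node.
    Add workload copies in order of increasing energy cost until the
    probability that at least one copy finishes on time reaches `target`."""
    p_all_fail, energy, chosen = 1.0, 0.0, []
    for cost, p in sorted(nodes):
        chosen.append((cost, p))
        energy += cost
        p_all_fail *= 1.0 - p  # independent-failure assumption
        if 1.0 - p_all_fail >= target:
            return chosen, energy
    return None  # target unreachable with the available nodes
```

The loop runs once over the sorted node list, so the sketch is polynomial in the number of nodes, matching the complexity class the abstract claims for the real algorithm.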
Code executing at the edge needs to run on hardware platforms that feature different memory architectures, virtualization extensions, and a range of security features. Forcing application code to conform to a monolithic API such as POSIX, or an ABI such as Linux, ties developers into large, complex platforms that make it difficult to use such hardware-specific features effectively, while also bringing their own baggage and attendant security issues. As edge computing proliferates, handling increasingly sensitive and intimate data in our everyday lives, it becomes important for developers to be able to use all the hardware resources of their particular platform, correctly and efficiently. To this end, we propose Snape, an API and composable platform for matching applications' needs to the available hardware features in a heterogeneous environment. Unlike existing solutions, Snape provides applications with a flexible trust model and replaces untrusted host OS services with corresponding hardware-assisted secured services. We report experience with our proof-of-concept implementation that enables Solo5 unikernels on Raspberry Pi 3 boards to make effective use of ARM TrustZone security technology.
Title: Snape: The Dark Art of Handling Heterogeneous Enclaves
Authors: Zahra Tarkhani, Anil Madhavapeddy, R. Mortier
DOI: https://doi.org/10.1145/3301418.3313945
Alejandro Cartas, M. Kocour, Aravindh Raman, Ilias Leontiadis, J. Luque, Nishanth R. Sastry, José Núñez-Martínez, Diego Perino, C. Segura
Edge computing is considered a key enabler for deploying Artificial Intelligence platforms that provide real-time applications such as AR/VR or cognitive assistance. Previous works show that computing capabilities deployed very close to the user can indeed reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the software architectures considered, in the context of two popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck for end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides a latency reduction but, overall, it is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
Title: A Reality Check on Inference at Mobile Networks Edge
DOI: https://doi.org/10.1145/3301418.3313946
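The finding that network latency dominates once inference is accelerated can be illustrated with a toy per-tier latency budget. All figures below are invented for illustration; they are not measurements from the paper.

```python
# Illustrative (invented) per-tier budgets in milliseconds:
# (network RTT to the tier, accelerated inference time at that tier).
TIERS = {"device": (0, 60), "first_hop_edge": (15, 8), "cloud": (45, 8)}


def best_tier(tiers=TIERS):
    """Pick the tier with the lowest end-to-end latency (RTT + inference)."""
    return min(tiers, key=lambda name: sum(tiers[name]))
```

With these toy numbers the first-hop edge wins, but the entire gap to the cloud comes from the network term rather than inference, which is the shift in bottleneck the abstract reports.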
Web applications are evolving towards a decentralized, client-centric architecture in which browsers need to put the user back in control of their personal data, operate in disconnected settings, and offload the web server as much as possible. This paper presents a set of key application scenarios and trends in different business domains that require a more client-centric and data-centric web middleware for decentralized, peer-to-peer web applications at the edge. We define a set of key requirements for data operations in such middleware and motivate them with the application cases. This paper further discusses the current state and limitations of the browser as a platform for peer-to-peer communication and complex decentralized applications with shared data. We conclude with a performance assessment of our first prototype middleware for client-centric and data-centric peer-to-peer web applications.
Title: The Web Browser as Distributed Application Server: Towards Decentralized Web Applications in the Edge
Authors: Kristof Jannes, B. Lagaisse, W. Joosen
DOI: https://doi.org/10.1145/3301418.3313938
Aleksandr Zavodovski, Nitinder Mohan, S. Bayhan, Walter Wong, J. Kangasharju
Edge computing (EC) extends the centralized cloud computing paradigm by bringing computation into close proximity to end-users, to the edge of the network, and is a key enabler for applications requiring low latency, such as augmented reality or content delivery. To make EC pervasive, the following challenges must be tackled: how to satisfy the growing demand for edge computing facilities, how to discover the nearby edge servers, and how to securely access them? In this paper, we present ExEC, an open framework where edge providers can offer their capacity and be discovered by application providers and end-users. ExEC aims at unifying the interaction between edge and cloud providers, so that cloud providers can utilize the services of third-party edge providers and any willing entity can easily become an edge provider. In ExEC, the unfolding of an initially cloud-deployed application towards the edge happens without administrative intervention, since ExEC discovers available edge providers on the fly and monitors incoming end-user traffic, determining the near-optimal placement of edge services. ExEC is a set of loosely coupled components and common practices, allowing for the custom implementations needed to embrace the diverse needs of specific EC scenarios. ExEC leverages only existing protocols and requires no modifications to the deployed infrastructure. Using real-world topology data and experiments on cloud platforms, we demonstrate the feasibility of ExEC and present results on its expected performance.
Title: ExEC: Elastic Extensible Edge Cloud
DOI: https://doi.org/10.1145/3301418.3313941
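ExEC's placement of edge services from monitored end-user traffic is not detailed in the abstract. As a minimal sketch of the general idea, the function below picks the provider minimizing demand-weighted mean RTT; the data model and all names are hypothetical, not ExEC's actual interfaces.

```python
def place_service(providers, demand):
    """providers: {provider: {region: rtt_ms}} from on-the-fly discovery;
    demand: {region: request_count} from monitored end-user traffic.
    Choose the edge provider minimizing the demand-weighted mean RTT."""
    total = sum(demand.values())

    def weighted_rtt(rtts):
        return sum(demand[r] * rtts[r] for r in demand) / total

    return min(providers, key=lambda p: weighted_rtt(providers[p]))
```

Because the decision depends only on observed RTTs and traffic counts, re-running it as demand shifts gives the administrative-intervention-free behavior the abstract describes.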