Aleksandra Knezevic, Quynh Nguyen, Jason A. Tran, Pradipta Ghosh, Pranav Sakulkar, B. Krishnamachari, M. Annavaram
CIRCE (Centralized Runtime sChedulEr) is a runtime scheduling software tool for dispersed computing. It can deploy pipelined computations described in the form of a Directed Acyclic Graph (DAG) on multiple geographically dispersed compute nodes at the edge and in the cloud. A key innovation in this scheduler compared to prior work is the incorporation of a run-time network profiler which accounts for the network performance among nodes when scheduling. This demo will show an implementation of CIRCE deployed on a testbed of tens of nodes, from both an edge computing testbed and a geographically distributed cloud, with real-time evaluation of the task processing performance of different scheduling algorithms.
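The core idea — weighting network performance alongside compute cost when placing DAG tasks — can be illustrated with a minimal greedy sketch. This is a hypothetical placement heuristic, not CIRCE's actual algorithm; the node names, cost model, and latency matrix are all invented for illustration.

```python
# Hypothetical sketch of network-aware DAG placement: each task (visited
# in topological order) goes to the node minimizing its compute cost plus
# the link latency of pulling inputs from its predecessors' placements.
def schedule_dag(tasks, deps, compute_cost, link_latency):
    placement = {}
    for task in tasks:  # tasks assumed topologically sorted
        best_node, best_cost = None, float("inf")
        for node, cpu in compute_cost[task].items():
            # cost of fetching inputs from where predecessors were placed
            net = sum(link_latency[placement[p]][node]
                      for p in deps.get(task, []))
            if cpu + net < best_cost:
                best_node, best_cost = node, cpu + net
        placement[task] = best_node
    return placement
```

With a runtime network profiler feeding `link_latency`, a scheduler of this shape naturally avoids placing a cheap task on a node that is expensive to reach from its predecessors.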
{"title":"CIRCE - a runtime scheduler for DAG-based dispersed computing: demo","authors":"Aleksandra Knezevic, Quynh Nguyen, Jason A. Tran, Pradipta Ghosh, Pranav Sakulkar, B. Krishnamachari, M. Annavaram","doi":"10.1145/3132211.3132451","DOIUrl":"https://doi.org/10.1145/3132211.3132451","url":null,"abstract":"CIRCE (Centralized Runtime sChedulEr) is a runtime scheduling software tool for dispersed computing. It can deploy pipelined computations described in the form of a Directed Acyclic Graph (DAG) on multiple geographically dispersed compute nodes at the edge and in the cloud. A key innovation in this scheduler compared to prior work is the incorporation of a run-time network profiler which accounts for the network performance among nodes when scheduling. This demo will show an implementation of CIRCE deployed on a testbed of tens of nodes, from both an edge computing testbed and a geographically distributed cloud, with real-time evaluation of the task processing performance of different scheduling algorithms.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126738217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without depending on a distributed file system. We implement a prototype system and conduct experiments using real-world production applications. Evaluation results reveal that, compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total service handoff time by 80% (56%) at a network bandwidth of 5 Mbps (20 Mbps).
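The benefit of leveraging layered storage can be sketched in a few lines: when the base image layers are already cached at the target edge server, only the container's thin writable layer needs to cross the network. The layer names and sizes below are invented for illustration; real Docker layers are content-addressed digests.

```python
# Illustrative sketch of layer-aware migration: base image layers that the
# target host already caches are deduplicated, so only missing layers plus
# the container's writable layer are transferred.
def layers_to_send(image_layers, writable_layer, target_cache):
    missing = [l for l in image_layers if l["id"] not in target_cache]
    return missing + [writable_layer]  # writable layer always moves
```

If the target already holds the 480 MB of base layers, a migration of this shape ships only the few megabytes of runtime state instead of the whole container file system.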
{"title":"Efficient service handoff across edge servers via docker container migration","authors":"Lele Ma, Shanhe Yi, Qun A. Li","doi":"10.1145/3132211.3134460","DOIUrl":"https://doi.org/10.1145/3132211.3134460","url":null,"abstract":"Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to the handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server, while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without dependence on the distributed file system. We implement a prototype system and conduct experiments using real world product applications. 
Evaluation results reveal that compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total duration of service handoff time by 80%(56%) with network bandwidth 5Mbps(20Mbps).","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125859690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kiryong Ha, Yoshihisa Abe, Thomas Eiszler, Zhuo Chen, Wenlu Hu, Brandon Amos, Rohit Upadhyaya, P. Pillai, M. Satyanarayanan
VM handoff enables rapid and transparent placement changes to executing code in edge computing use cases where the safety and management attributes of VM encapsulation are important. This versatile primitive offers the functionality of classic live migration but is highly optimized for the edge. Over WAN bandwidths ranging from 5 to 25 Mbps, VM handoff migrates a running 8 GB VM in about a minute, with a downtime of a few tens of seconds. By dynamically adapting to varying network bandwidth and processing load, VM handoff is more than an order of magnitude faster than live migration at those bandwidths.
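A back-of-envelope calculation shows why naive transfer is hopeless at these bandwidths and why handoff must ship only a compressed delta of the VM state. The dirty fraction and compression ratio below are illustrative assumptions, not the paper's measured values.

```python
# Why adaptation matters: pushing a full 8 GB image over a 10 Mbps WAN
# link takes well over an hour, so handoff must send a reduced payload.
def transfer_seconds(payload_bytes, bandwidth_mbps):
    """Wire time for a payload at the given bandwidth (megabits/s)."""
    return payload_bytes / (bandwidth_mbps * 1e6 / 8)

def handoff_payload(vm_bytes, dirty_fraction, compress_ratio):
    """Ship only the dirty state, compressed; both knobs are assumptions."""
    return vm_bytes * dirty_fraction / compress_ratio
```

Adapting `dirty_fraction` tracking and compression effort to the measured bandwidth and CPU load is what lets handoff finish in about a minute where naive migration would take hours.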
{"title":"You can teach elephants to dance: agile VM handoff for edge computing","authors":"Kiryong Ha, Yoshihisa Abe, Thomas Eiszler, Zhuo Chen, Wenlu Hu, Brandon Amos, Rohit Upadhyaya, P. Pillai, M. Satyanarayanan","doi":"10.1145/3132211.3134453","DOIUrl":"https://doi.org/10.1145/3132211.3134453","url":null,"abstract":"VM handoff enables rapid and transparent placement changes to executing code in edge computing use cases where the safety and management attributes of VM encapsulation are important. This versatile primitive offers the functionality of classic live migration but is highly optimized for the edge. Over WAN bandwidths ranging from 5 to 25 Mbps, VM handoff migrates a running 8 GB VM in about a minute, with a downtime of a few tens of seconds. By dynamically adapting to varying network bandwidth and processing load, VM handoff is more than an order of magnitude faster than live migration at those bandwidths.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123457961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zheng Dong, Yuchuan Liu, Husheng Zhou, Xusheng Xiao, Y. Gu, Lingming Zhang, Cong Liu
As battery-powered embedded devices have limited computational capacity, computation offloading becomes a promising solution that selectively migrates computations to powerful remote servers. The driving problem that motivates this work is to leverage remote resources to facilitate the development of mobile augmented reality (AR) systems. Due to the (soft) timing predictability requirements of many AR-based computations (e.g., object recognition tasks require bounded response times), it is challenging to develop an offloading framework that jointly optimizes the two (somewhat conflicting) goals of achieving timing predictability and energy efficiency. This paper presents a comprehensive offloading and resource management framework for embedded systems, which aims to ensure predictable response-time performance while minimizing energy consumption. We develop two offloading algorithms within the framework, which decide the task components that shall be offloaded so that both goals can be achieved simultaneously. We have fully implemented our framework on an Android smartphone platform. An in-depth evaluation using representative Android applications and benchmarks demonstrates that our proposed offloading framework dominates existing approaches in terms of timing predictability (e.g., ours can support workloads with 100% more required CPU utilization), while effectively reducing energy consumption.
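The joint constraint — offload only when the remote path meets the response-time bound *and* saves device energy — can be captured in a toy decision rule. This is a hypothetical sketch with an invented linear radio-energy model, not one of the paper's two algorithms.

```python
# Hypothetical offload decision combining the two goals: the remote path
# must fit the deadline, and the device-side energy (spent transmitting)
# must undercut running the component locally.
def should_offload(local_mj, tx_ms, remote_ms, radio_mw, deadline_ms):
    remote_total_ms = tx_ms + remote_ms
    if remote_total_ms > deadline_ms:
        return False  # offloading would break the timing bound
    remote_mj = radio_mw * tx_ms / 1000.0  # device energy while sending
    return remote_mj < local_mj
```

Note the conflict the paper describes: a fast radio that burns more power can make offloading meet the deadline yet cost more energy than local execution, so neither goal can be optimized in isolation.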
{"title":"An energy-efficient offloading framework with predictable temporal correctness","authors":"Zheng Dong, Yuchuan Liu, Husheng Zhou, Xusheng Xiao, Y. Gu, Lingming Zhang, Cong Liu","doi":"10.1145/3132211.3134448","DOIUrl":"https://doi.org/10.1145/3132211.3134448","url":null,"abstract":"As battery-powered embedded devices have limited computational capacity, computation offloading becomes a promising solution that selectively migrates computations to powerful remote severs. The driving problem that motivates this work is to leverage remote resources to facilitate the development of mobile augmented reality (AR) systems. Due to the (soft) timing predictability requirements of many AR-based computations (e.g., object recognition tasks require bounded response times), it is challenging to develop an offloading framework that jointly optimizes the two (somewhat conflicting) goals of achieving timing predictability and energy efficiency. This paper presents a comprehensive offloading and resource management framework for embedded systems, which aims to ensure predictable response time performance while minimizing energy consumption. We develop two offloading algorithms within the framework, which decide the task components that shall be offloaded so that both goals can be achieved simultaneously. We have fully implemented our framework on an Android smartphone platform. 
An in-depth evaluation using representative Android applications and benchmarks demonstrates that our proposed offloading framework dominates existing approaches in term of timing predictability (e.g., ours can support workloads with 100% more required CPU utilization), while effectively reducing energy consumption.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124761961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. H. Mortazavi, Mohammad Salehe, C. S. Gomes, Caleb Phillips, E. D. Lara
Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing size, positioned between the client device and the traditional wide-area cloud datacenter. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of lightweight stateless event handlers, and a distributed, eventually consistent storage system that replicates application data on demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.
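The placement intuition — run a handler at the lowest tier on the client-to-cloud path that can fit it — can be sketched in a few lines. Tier names and capacities are invented for illustration; CloudPath's real placement logic is richer than this.

```python
# Sketch of tier selection along the path: tiers are ordered from the
# client outward to the wide-area cloud, and the first (closest, hence
# lowest-latency) tier with enough free capacity wins.
def place_handler(path_tiers, required_mb):
    for tier in path_tiers:
        if tier["free_mb"] >= required_mb:
            return tier["name"]
    return None  # no capacity anywhere on the path
```

Because handlers are small and stateless, a miss at one tier simply falls through to the next-larger datacenter rather than failing the request.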
{"title":"Cloudpath: a multi-tier cloud computing framework","authors":"S. H. Mortazavi, Mohammad Salehe, C. S. Gomes, Caleb Phillips, E. D. Lara","doi":"10.1145/3132211.3134464","DOIUrl":"https://doi.org/10.1145/3132211.3134464","url":null,"abstract":"Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127611054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SEC is a premier symposium with a highly selective single-track technical program, dedicated to addressing the challenges in edge computing. SEC is orchestrated to provide a unique platform for researchers and practitioners to exchange ideas and demonstrate the most recent advances in research and development on edge computing.
{"title":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","authors":"Junshan Zhang, M. Chiang, B. Maggs","doi":"10.1145/3132211","DOIUrl":"https://doi.org/10.1145/3132211","url":null,"abstract":"SEC is a premier symposium with a highly selective single-track technical program, dedicated to addressing the challenges in edge computing. SEC is orchestrated to provide a unique platform for researchers and practitioners to exchange ideas and demonstrate the most recent advances in research and development on edge computing.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131084798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhe Huang, Bharath Balasubramanian, Azzam Alsudais, Kaustubh R. Joshi
Searching for a particular device in an ocean of devices is a perfect illustration of the idiom 'searching for a needle in a haystack'. Yet future IoT and edge computing platforms face an even more challenging problem, because their mission-critical operations (e.g., application orchestration, device and application telemetry, inventory management) depend on their ability to identify nodes of interest from potentially millions of service providers across the globe according to highly dynamic attributes such as geo-location, bandwidth availability, real-time workload, and so on. For example, a vehicular crowd-sensing application that collects air quality data near a highway exit needs to locate cars in close proximity to the exit among millions of cars on the road. In a business model where an enterprise offers a framework for clients to use such edge/IoT services, we investigate the following problem: "among millions of IoT/edge nodes, how do we locate and communicate with only those nodes that satisfy certain attributes, especially when some of these attributes change rapidly?" In this paper, we address this problem through the design of a scalable message broker based on the following novel intuition: device discovery should be a joint effort between a centrally managed enterprise-level system (high availability, low accuracy) and the fully decentralized edge (high accuracy, unpredictable availability). To elaborate, the enterprise can centrally maintain and manage the attributes of all the IoT devices. However, since millions of devices cannot constantly update their attribute information, central management suffers from attribute staleness. Clearly, the devices themselves have the most up-to-date information. However, it is not feasible for every request to be routed to millions of devices connected by unpredictable networks, where only some of them may possess the correct attributes.
In this paper, we propose a message broker in which requests for relatively static device attributes are handled by the centrally managed system, whereas requests for dynamic attributes are handled by peer-to-peer networks of the edge devices holding those attributes. This combination provides a scalable solution wherein, based on client needs, we can obtain attribute values without compromising freshness or performance. Several previous works aim to tackle the device-search problem. Name-based networking solutions such as the Intentional Naming System (INS) [1], Auspice [5], and the global name service [3] propose a centrally managed name resolution service. Devices periodically update their status information and descriptions in a push approach. While maintaining complete knowledge of every device in the network centrally makes searching much easier, the excessive workload from millions of devices updating their status in a highly dynamic environment renders the scheme unscalable.
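The proposed split can be reduced to a routing rule: queries on slowly changing attributes hit the central registry, while queries on fast-changing ones are pushed out to the edge peers themselves. The attribute classification and the two lookup stubs below are assumptions made for illustration.

```python
# Minimal sketch of the hybrid broker: static attributes are answered
# from the central registry (fresh enough, highly available); dynamic
# attributes are resolved by asking the edge devices directly.
STATIC_ATTRS = {"model", "owner", "firmware"}

def route_query(attr, central_lookup, edge_lookup):
    if attr in STATIC_ATTRS:
        return central_lookup(attr)  # central copy avoids fan-out
    return edge_lookup(attr)         # only the device knows the latest value
```

The design choice is exactly the trade-off stated above: the central path trades accuracy for availability, the edge path the reverse, and the attribute's volatility decides which trade is acceptable.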
{"title":"An edge-facilitated message broker for scalable device discovery: poster","authors":"Zhe Huang, Bharath Balasubramanian, Azzam Alsudais, Kaustubh R. Joshi","doi":"10.1145/3132211.3132456","DOIUrl":"https://doi.org/10.1145/3132211.3132456","url":null,"abstract":"Searching for a particular device in an ocean of devices is a perfect illustration of the idiom 'searching a needle in a haystack'. Yet the future IoT and edge computing platforms are facing an even more challenging problem because their mission-critical operations (e.g., application orchestration, device and application telemetry, inventory management) depend on their capability of identifying nodes of interest from potentially millions of service providers across the globe according to highly dynamic attributes such as geo-location information, bandwidth availability, real-time workload and so on. For example, a vehicular-based crowd sensing application that collects air quality data near an exit of a highway needs to locate cars in close proximity to the exit among millions of cars running on the road. In a business model where an enterprise offers a framework for clients to avail such edge/IoT services, we investigate the following problem: \"among millions of IoT/Edge nodes, how do we locate and communicate with only those nodes that satisfy certain attributes, especially when some of these attributes change rapidly?\" In this paper, we address this problem through the design of a scalable message broker based on the following novel intuition: device discovery should be a joint effort between a centrally managed enterprise-level system (high availability, low accuracy) and the fully decentralized edge (high accuracy, unpredictable availability). To elaborate, the enterprise can centrally maintain and manage the attributes of all the IoT devices. However, since millions of devices cannot constantly update their attribute information, central management has the issue of attribute staleness. 
Clearly the devices themselves have the most up-to-date information. However, it is not feasible for every request to be routed to million devices connected by unpredictable networks, where only some of them may possess the correct attributes. In this paper, we propose a message broker, in which requests for relatively static device attributes are handled by the centrally managed system, whereas, requests for dynamic attributes are handled by peer-to-peer networks of the edge devices containing those attributes. This combination provides a scalable solution wherein, based on client needs, we can obtain attribute values without compromising on freshness or performance. There exist several previous works that aim to tackle the device searching problem. Name-based networking solutions such as Intentional Naming System (INS) [1], Auspice [5], and global name service [3] propose to implement a centrally managed name resolution service. Devices periodically update their status information and descriptions in a push approach. While maintaining complete knowledge of every device in the network centrally makes the searching much easier, the excessive workload from millions of devices updating their status in a highly dynamic environment renders the scheme unsaleable. ","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132472468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Badri, Tayebeh Bahreini, Daniel Grosu, Kai Yang
Efficient service placement of mobile applications on the edge servers is one of the main challenges in Mobile Edge Computing (MEC). The service placement problem in MEC has to consider several issues that were not present in the data-center settings. After the initial service placement, mobile users may move to different locations which may increase the execution time or the cost of running the applications. In addition to this, the resource availability of servers may change over time. Therefore, an efficient service placement algorithm must be adaptive to this dynamic setting.
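The multi-stage stochastic flavor of the problem can be shown with a toy two-stage version: pick the initial server to minimize placement cost now plus the expected migration cost over user-mobility scenarios. The servers, costs, and scenario probabilities are invented for illustration; the paper's formulation has more stages and constraints.

```python
# Toy two-stage stochastic placement: each scenario is a (probability,
# {server: second-stage cost}) pair, and the chosen server minimizes
# first-stage cost plus expected second-stage cost.
def best_initial_server(servers, place_cost, scenarios):
    def expected_total(s):
        return place_cost[s] + sum(p * stage2[s] for p, stage2 in scenarios)
    return min(servers, key=expected_total)
```

Note how the answer can differ from the myopic choice: a server that is cheapest today may be the worst bet once likely user movement is priced in.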
{"title":"Multi-stage stochastic programming for service placement in edge computing systems: poster","authors":"H. Badri, Tayebeh Bahreini, Daniel Grosu, Kai Yang","doi":"10.1145/3132211.3132461","DOIUrl":"https://doi.org/10.1145/3132211.3132461","url":null,"abstract":"Efficient service placement of mobile applications on the edge servers is one of the main challenges in Mobile Edge Computing (MEC). The service placement problem in MEC has to consider several issues that were not present in the data-center settings. After the initial service placement, mobile users may move to different locations which may increase the execution time or the cost of running the applications. In addition to this, the resource availability of servers may change over time. Therefore, an efficient service placement algorithm must be adaptive to this dynamic setting.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124103400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christopher Streiffer, Animesh Srivastava, Victor Orlikowski, Yesenia Velasco, Vincentius Martin, Nisarg Raval, Ashwin Machanavajjhala, Landon P. Cox
Edge computing offers resource-constrained devices low-latency access to high-performance computing infrastructure. In this paper, we present ePrivateEye, an implementation of PrivateEye that offloads computationally expensive computer-vision processing to an edge server. The original PrivateEye locally processed video frames on a mobile device and delivered approximately 20 fps, whereas ePrivateEye transfers frames to a remote server for processing. We present experimental results that utilize our campus Software-Defined Networking infrastructure to characterize how network-path latency, packet loss, and geographic distance impact offloading to the edge in ePrivateEye. We show that offloading video-frame analysis to an edge server at a metro-scale distance allows ePrivateEye to analyze more frames than PrivateEye's local processing over the same period, achieving real-time performance of 30 fps with perfect precision and negligible impact on energy efficiency.
{"title":"ePrivateeye: to the edge and beyond!","authors":"Christopher Streiffer, Animesh Srivastava, Victor Orlikowski, Yesenia Velasco, Vincentius Martin, Nisarg Raval, Ashwin Machanavajjhala, Landon P. Cox","doi":"10.1145/3132211.3134457","DOIUrl":"https://doi.org/10.1145/3132211.3134457","url":null,"abstract":"Edge computing offers resource-constrained devices low-latency access to high-performance computing infrastructure. In this paper, we present ePrivateEye, an implementation of PrivateEye that offloads computationally expensive computer-vision processing to an edge server. The original PrivateEye locally processed video frames on a mobile device and delivered approximately 20 fps, whereas ePrivateEye transfers frames to a remote server for processing. We present experimental results that utilize our campus Software-Defined Networking infrastructure to characterize how network-path latency, packet loss, and geographic distance impact offloading to the edge in ePrivateEye. We show that offloading video-frame analysis to an edge server at a metro-scale distance allows ePrivateEye to analyze more frames than PrivateEye's local processing over the same period to achieve realtime performance of 30 fps, with perfect precision and negligible impact on energy efficiency.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121167451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with an incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time the document file is synchronized. We analyze the causes of the problem in depth, and propose EdgeCourier, a system to address it. We also propose the concept of an edge-hosted personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of an EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.
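The underlying observation can be demonstrated directly: office files (.docx, .xlsx, etc.) are zip archives, so a small edit changes only a few inner entries, even though recompression makes the whole file look different byte-for-byte — which is why naive incremental sync falls back to whole-file transfer. The sketch below only shows that observation; EdgeCourier's actual sync protocol is more involved.

```python
# Compare two versions of an office document entry-by-entry inside the
# zip container, returning the inner files whose content actually changed.
import hashlib
import io
import zipfile

def changed_entries(old_doc: bytes, new_doc: bytes):
    def entry_hashes(blob):
        with zipfile.ZipFile(io.BytesIO(blob)) as z:
            return {n: hashlib.sha256(z.read(n)).hexdigest()
                    for n in z.namelist()}
    old, new = entry_hashes(old_doc), entry_hashes(new_doc)
    return [n for n, h in new.items() if old.get(n) != h]
```

Syncing only the changed entries (here, typically just `word/document.xml` after a text edit) is what lets an edge-hosted service cut the upload down from the full archive to a few kilobytes.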
{"title":"Edgecourier: an edge-hosted personal service for low-bandwidth document synchronization in mobile cloud storage services","authors":"Pengzhan Hao, Yongshu Bai, Xin Zhang, Yifan Zhang","doi":"10.1145/3132211.3134447","DOIUrl":"https://doi.org/10.1145/3132211.3134447","url":null,"abstract":"Using cloud storage to automatically back up content changes when editing documents is an everyday scenario. We demonstrate that current cloud storage services can cause unnecessary bandwidth consumption, especially for office suite documents, in this common scenario. Specifically, even with incremental synchronization approach in place, existing cloud storage services still incur whole-file transmission every time when the document file is synchronized. We analyze the problem causes in depth, and propose EdgeCourier, a system to address the problem. We also propose the concept of edge-hosed personal service (EPS), which has many benefits, such as helping deploy EdgeCourier easily in practice. We have prototyped the EdgeCourier system, deployed it in the form of EPS in a lab environment, and performed extensive experiments for evaluation. Evaluation results suggest that our prototype system can effectively reduce document synchronization bandwidth with negligible overheads.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127229361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}