An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance
Zhuo Chen, Wenlu Hu, Junjue Wang, Siyan Zhao, Brandon Amos, Guanhang Wu, Kiryong Ha, Khalid Elgazzar, P. Pillai, R. Klatzky, D. Siewiorek, M. Satyanarayanan
An emerging class of interactive wearable cognitive assistance applications is poised to become one of the key demonstrators of edge computing infrastructure. In this paper, we design seven such applications and evaluate their performance in terms of latency across a range of edge computing configurations, mobile hardware, and wireless networks, including 4G LTE. We also devise a novel multi-algorithm approach that leverages temporal locality to reduce end-to-end latency by 60% to 70%, without sacrificing accuracy. Finally, we derive target latencies for our applications, and show that edge computing is crucial to meeting these targets.
{"title":"An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance","authors":"Zhuo Chen, Wenlu Hu, Junjue Wang, Siyan Zhao, Brandon Amos, Guanhang Wu, Kiryong Ha, Khalid Elgazzar, P. Pillai, R. Klatzky, D. Siewiorek, M. Satyanarayanan","doi":"10.1145/3132211.3134458","DOIUrl":"https://doi.org/10.1145/3132211.3134458","url":null,"abstract":"An emerging class of interactive wearable cognitive assistance applications is poised to become one of the key demonstrators of edge computing infrastructure. In this paper, we design seven such applications and evaluate their performance in terms of latency across a range of edge computing configurations, mobile hardware, and wireless networks, including 4G LTE. We also devise a novel multi-algorithm approach that leverages temporal locality to reduce end-to-end latency by 60% to 70%, without sacrificing accuracy. Finally, we derive target latencies for our applications, and show that edge computing is crucial to meeting these targets.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124421028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-preserving of platoon-based V2V in collaborative edge: poster abstract
Te-Chuan Chiu, Junshan Zhang, Ai-Chun Pang
Next-generation 5G cellular networks are designed around a device-centric architecture that provides not only human-oriented services but also machine-type communication (Internet of Things applications). To meet the requirements of next-generation IoT services such as automated driving, this paper considers multiple automated vehicles, each with its own processing and communication capability acting as a fog node, which form into groups (platoons) to carry out a pseudonym change procedure. Unlike traditional studies that lack the platoon design concept, we take into account "reaction time", a practical platoon management factor that affects the intra-platoon spacing between a leading vehicle and its follower, and tackle a platoon-aware pseudonym change problem that jointly achieves privacy gains and traffic efficiency through cooperation among multiple platoons at the edge.
{"title":"Privacy-preserving of platoon-based V2V in collaborative edge: poster abstract","authors":"Te-Chuan Chiu, Junshan Zhang, Ai-Chun Pang","doi":"10.1145/3132211.3132464","DOIUrl":"https://doi.org/10.1145/3132211.3132464","url":null,"abstract":"Next generation 5G cellular networks is designed as a device-centric architecture, which provides not only human-based services but also machine-type communication (Internet of Things applications). To meet different kinds of next generation IoT service requirements such as automated driving, in this paper we discuss multiple automated vehicles, each of which owns processing and communication ability as a fog node and form into multiple groups as platoons to do pseudonym change procedure. Different from the traditional studies without platoon design concept, we consider \"reaction time\"- a practical platoon management factor which affecting intra-platoon spacing between the leading vehicle and the following vehicle, to tackle a platoon-aware pseudonym change problem for jointly achieving privacy gains and traffic efficiency among multiple platoons' cooperation at the edge.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116054346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A vehicle-based edge computing platform for transit and human mobility analytics
Bozhao Qi, Lei Kang, Suman Banerjee
This paper introduces Trellis, a low-cost Wi-Fi-based in-vehicle monitoring and tracking system that can passively observe mobile devices and provide various analytics about people both within and outside a vehicle, which can lead to interesting population insights at city scale. Our system runs on a vehicle-based edge computing platform and is a complementary mechanism that allows operators to collect information such as origin-destination stations popular among passengers, vehicle occupancy, pedestrian activity trends, and more. To conduct most of our analytics, we develop simple but effective algorithms that determine whether a device is inside or outside a vehicle by leveraging contextual information. While our current system does not provide exact counts of passengers and pedestrians, we expect the relative numbers and general trends to be useful from an analytics perspective. We deployed Trellis on a vehicle-based edge computing platform over a period of ten months and collected more than 30,000 miles of travel data spanning multiple bus routes. By combining our techniques with bus schedule and weather information, we present a varied human mobility analysis across multiple aspects: activity trends of passengers in transit systems, trends of pedestrians on city streets, and how external factors, e.g., temperature and weather, impact human outdoor activities. These observations demonstrate the usefulness of Trellis in the proposed settings.
{"title":"A vehicle-based edge computing platform for transit and human mobility analytics","authors":"Bozhao Qi, Lei Kang, Suman Banerjee","doi":"10.1145/3132211.3134446","DOIUrl":"https://doi.org/10.1145/3132211.3134446","url":null,"abstract":"This paper introduces Trellis --- a low-cost Wi-Fi-based in vehicle monitoring and tracking system that can passively observe mobile devices and provide various analytics about people both within and outside a vehicle which can lead to interesting population insights at a city scale. Our system runs on a vehicle-based edge computing platform and is a complementary mechanism which allows operators to collect various information, such as original-destination stations popular among passengers, occupancy of vehicles, pedestrian activity trends, and more. To conduct most of our analytics, we develop simple but effective algorithms that determine which device is actually inside (or outside) of a vehicle by leveraging some contextual information. While our current system does not provide accurate actual numbers of passengers and pedestrians, we expect the relative numbers and general trends to be fairly useful from an analytics perspective. We have deployed Trellis on a vehicle-based edge computing platform over a period of ten months, and have collected more than 30,000 miles of travel data spanning multiple bus routes. By combining our techniques, with bus schedule and weather information, we present a varied human mobility analysis across multiple aspects --- activity trends of passengers in transit systems; trends of pedestrians on city streets; and how external factors, e.g., temperature and weather, impact human outdoor activities. These observations demonstrate the usefulness of Trellis in proposed settings.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122681711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gremlin: scheduling interactions in vehicular computing
Kyungmin Lee, J. Flinn, Brian D. Noble
Vehicular applications must not demand too much of a driver's attention. They often run in the background and initiate interactions with the driver to deliver important information. We argue that the vehicular computing system must schedule interactions by considering their priority, the attention they will demand, and how much attention the driver currently has to spare. Based on these considerations, it should either allow a given interaction or defer it. We describe a prototype called Gremlin that leverages edge computing infrastructure to help schedule interactions initiated by vehicular applications. It continuously performs four tasks: (1) monitoring driving conditions to estimate the driver's available attention, (2) recording interactions for analysis, (3) generating a user-specific quantitative model of the attention required for each distinct interaction, and (4) scheduling new interactions based on the above data. Gremlin performs the third task on edge computing infrastructure. Offloading is attractive because the analysis is too computationally demanding to run on vehicular platforms. Since the recording for each interaction can be large, it is preferable to perform the offloaded computation at the edge of the network rather than in the cloud, thereby conserving wide-area network bandwidth. We evaluate Gremlin by comparing its decisions to those recommended by a vehicular UI expert. Gremlin's decisions agree with the expert's over 90% of the time, much more frequently than the coarse-grained scheduling policies used by current vehicle systems. Further, we find that offloading the analysis to edge platforms reduces wide-area network use by an average of 15 MB per analyzed interaction.
{"title":"Gremlin: scheduling interactions in vehicular computing","authors":"Kyungmin Lee, J. Flinn, Brian D. Noble","doi":"10.1145/3132211.3134450","DOIUrl":"https://doi.org/10.1145/3132211.3134450","url":null,"abstract":"Vehicular applications must not demand too much of a driver's attention. They often run in the background and initiate interactions with the driver to deliver important information. We argue that the vehicular computing system must schedule interactions by considering their priority, the attention they will demand, and how much attention the driver currently has to spare. Based on these considerations, it should either allow a given interaction or defer it. We describe a prototype called Gremlin that leverages edge computing infrastructure to help schedule interactions initiated by vehicular applications. It continuously performs four tasks: (1) monitoring driving conditions to estimate the driver's available attention, (2) recording interactions for analysis, (3) generating a user-specific quantitative model of the attention required for each distinct interaction, and (4) scheduling new interactions based on the above data. Gremlin performs the third task on edge computing infrastructure. Offload is attractive because the analysis is too computationally demanding to run on vehicular platforms. Since recording size for each interaction can be large, it is preferable to perform the offloaded computation at the edge of the network rather than in the cloud, and thereby conserve wide-area network bandwidth. We evaluate Gremlin by comparing its decisions to those recommended by a vehicular UI expert. Gremlin's decisions agree with the expert's over 90% of the time, much more frequently than the coarse-grained scheduling policies used by current vehicle systems. Further, we find that offloading of analysis to edge platforms reduces use of wide-area networks by an average of 15MB per analyzed interaction.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129630483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PredriveID: pre-trip driver identification from in-vehicle data
Gorkem Kar, Shubham Jain, M. Gruteser, Jinzhu Chen, F. Bai, R. Govindan
This paper explores the minimal dataset necessary at vehicular edge nodes to effectively differentiate drivers using data from existing in-vehicle sensors. This facilitates novel personalization, insurance, advertising, and security applications, and can also help in understanding the privacy sensitivity of such data. Existing work on differentiating drivers largely relies on the devices that drivers carry or on the locations that drivers visit. Internally, however, the vehicle processes a much richer set of sensor information that is becoming increasingly available to external services. To explore how easily drivers can be distinguished from such data, we consider a system that interfaces to the vehicle bus and executes supervised or unsupervised driver differentiation techniques on this data. To facilitate this analysis and to evaluate the system, we collect in-vehicle data from 24 drivers on a controlled campus test route, as well as 480 trips over three weeks from five shared university mail vans. We also conduct studies with members of a single family. The results show that driver differentiation does not require long sequences of driving telemetry data: it can be accomplished with 91% accuracy within 20 s of the driver entering the vehicle, usually even before the vehicle starts moving.
{"title":"PredriveID: pre-trip driver identification from in-vehicle data","authors":"Gorkem Kar, Shubham Jain, M. Gruteser, Jinzhu Chen, F. Bai, R. Govindan","doi":"10.1145/3132211.3134462","DOIUrl":"https://doi.org/10.1145/3132211.3134462","url":null,"abstract":"This paper explores the minimal dataset necessary at vehicular edge nodes, to effectively differentiate drivers using data from existing in-vehicle sensors. This facilitates novel personalization, insurance, advertising, and security applications but can also help in understanding the privacy sensitivity of such data. Existing work on differentiating drivers largely relies on devices that drivers carry, or on the locations that drivers visit to distinguish drivers. Internally, however, the vehicle processes a much richer set of sensor information that is becoming increasingly available to external services. To explore how easily drivers can be distinguished from such data, we consider a system that interfaces to the vehicle bus and executes supervised or unsupervised driver differentiation techniques on this data. To facilitate this analysis and to evaluate the system, we collect in-vehicle data from 24 drivers on a controlled campus test route, as well as 480 trips over three weeks from five shared university mail vans. We also conduct studies between members of a family. The results show that driver differentiation does not require longer sequences of driving telemetry data but can be accomplished with 91% accuracy within 20s after the driver enters the vehicle, usually even before the vehicle starts moving.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116053456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards efficient edge cloud augmentation for virtual reality MMOGs
Wuyang Zhang, Jiachen Chen, Yanyong Zhang, D. Raychaudhuri
With the popularity of Massively Multiplayer Online Games (MMOGs) and Virtual Reality (VR) technologies, VR-MMOGs are developing quickly, demanding ever faster gaming interactions and image rendering. In this paper, we identify three main challenges of VR-MMOGs: (1) a stringent latency requirement for frequent local view change responses, (2) a high bandwidth requirement for constant refreshing, and (3) a scalability requirement to support a large number of simultaneous players. Understanding that a cloud-centric gaming architecture may struggle to deliver the latency/bandwidth requirements, the game development community is attempting to leverage edge cloud computing. However, one problem remains unsolved: how to distribute the work among the user device, the edge clouds, and the central cloud to meet all three requirements, especially when users are mobile. In this paper, we propose a hybrid gaming architecture that achieves a clever work distribution. It places local view change updates on edge clouds for immediate responses, frame rendering on edge clouds for high bandwidth, and global game state updates on the central cloud for user scalability. In addition, we propose an efficient service placement algorithm based on a Markov decision process. This algorithm dynamically places a user's gaming service on edge clouds as the user moves through different access points. It also co-places multiple users to facilitate game world sharing and reduce the overall migration overhead. We derive optimal solutions and devise efficient heuristic approaches. We also study different algorithm implementations to speed up the runtime. Through detailed simulation studies, we validate our placement algorithms and show that our architecture has the potential to meet all three requirements of VR-MMOGs.
{"title":"Towards efficient edge cloud augmentation for virtual reality MMOGs","authors":"Wuyang Zhang, Jiachen Chen, Yanyong Zhang, D. Raychaudhuri","doi":"10.1145/3132211.3134463","DOIUrl":"https://doi.org/10.1145/3132211.3134463","url":null,"abstract":"With the popularity of Massively Multiplayer Online Games (MMOGs) and Virtual Reality (VR) technologies, VR-MMOGs are developing quickly, demanding ever faster gaming interactions and image rendering. In this paper, we identify three main challenges of VR-MMOGs: (1)a stringent latency requirement for frequent local view change responses, (2) a high bandwidth requirement for constant refreshing, and (3)a large scale requirement for a large number of simultaneous players. Understanding that a cloud-centric gaming architecture may struggle to deliver the latency/bandwidth requirements, the game development community is attempting to leverage edge cloud computing. However, one problem remains unsolved: how to distribute the work among the user device, the edge clouds, and the center cloud to meet all three requirements especially when users are mobile. In this paper, we propose a hybrid gaming architecture that achieves clever work distribution. It places local view change updates on edge clouds for immediate responses, frame rendering on edge clouds for high bandwidth, and global game state updates on the center cloud for user scalability. In addition, we propose an efficient service placement algorithm based on a Markov decision process. This algorithm dynamically places a user's gaming service on edge clouds while the user moves through different access points. It also co-places multiple users to facilitate game world sharing and reduce the overall migration overhead. We derive optimal solutions and devise efficient heuristic approaches. We also study different algorithm implementations to speed up the runtime. Through detailed simulation studies, we validate our placement algorithms and also show that our architecture has the potential to meet all three requirements of VR-MMOGs.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129330383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeServE: delay-agnostic service offloading in mobile edge clouds: poster
Amit Samanta, Yong Li
Mobile edge computing platforms [2, 3] generally measure delay as the total time required to offload services from edge devices to edge servers. The service delay, however, also depends on the response times of edge services for different mobile applications, so there is a disparity between the actual and the measured service delay experienced by mobile edge devices. The problem arises because the wireless radio access network is shared by multiple mobile devices: congestion occurs, and arriving packets are likely to be dropped by switches in the access network. In this situation, real-time critical edge applications care most about the quality of service (QoS) delivered to individual mobile devices while their services are offloaded to edge servers. We therefore improve the QoS of edge devices with a delay-agnostic service offloading scheme that meets the offloading requirements of such applications. Prior work has addressed this gap at the edge network through priority-based computational offloading [1] and energy-efficient resource allocation [4]. However, priority-based offloading and resource allocation do not provide fair QoS to mobile devices, because the actual bottleneck usually lies in the delay-agnostic nature of mobile applications. Existing offloading schemes perform well for multi-modal applications in terms of energy efficiency and computational overhead [1], but they assume the delay requirement of a service to be fixed, whereas in practice it may vary radically; this poster treats that variation as the delay-agnostic property of edge devices. We therefore propose DeServE, a delay-agnostic service offloading scheme for mobile edge computing platforms that improves the QoS of each individual device. An overview of DeServE is depicted in Figure 1. A flexible and optimal resource allocation technique, built on software-defined networking (SDN), supports the delay-agnostic offloading of edge devices. Instead of installing a centralized SDN controller, an adaptive service identifier is deployed at the edge of the platform to identify critical edge applications. Once critical edge services are identified, they are forwarded to the controller, and the corresponding offloading rules are installed for those services on the edge servers.
{"title":"DeServE: delay-agnostic service offloading in mobile edge clouds: poster","authors":"Amit Samanta, Yong Li","doi":"10.1145/3132211.3132454","DOIUrl":"https://doi.org/10.1145/3132211.3132454","url":null,"abstract":"The mobile edge computing platform [2, 3] generally measured the delay based on the total time required to offload the services of edge devices to edge servers. On the other hand, the service delay is also depended on the response time of edge services from different mobile applications. Hence, there exists a disparity between the actual and measured service delay experienced by the mobile edge devices. This type problem arises due to the fact that the wireless radio network infrastructure is shared by the multiple mobile devices. As a result, the congestion occurs in the network and the arrival packets are likely to be dropped by the network switches in the access network. In such situation, the real-time critical edge applications mostly care about the quality-of-service (QoS) for different mobile devices, while offloading the edge services to edge servers. Thus, we improve the QoS of edge devices using delay-agnostic service offloading scheme to meet the offloading requirements of those delay-agnostic applications. Prior works have devoted to filling the gap at edge network by introducing the priority-based computational offloading [1] or energy-efficient resource allocation [4] schemes for edge computing applications. However, the priority-based service offloading and resource allocation do not provide the fair QoS to mobile devices, as the actual bottleneck is usually existed in the delay-agnostic nature of mobile applications. Therefore, although the existing offloading schemes perform well for multi-modal applications in terms of energy efficiency and computational overhead [1]. However, in those works, they assumed that the delay requirement of services to be fixed, but in real-life the delay requirement of services may vary radically. This type of situation is considered to be delay-agnostic property for edge devices, in this poster. Looking at the above points, we propose DESERVE, which introduces a delay-agnostic service offloading scheme for mobile edge computing platform in order to improve the QoS of each individual. The overview of DESERVE is depicted in Figure 1. For the edge devices, the flexible and optimal resource allocation technique is exploited for delay-agnostic service offloading scheme in edge computing platform. The resource allocation technique is leveraged for edge devices using the advanced techniques of software defined networks (SDN). However, an adaptive service identifier is deployed specifically for the identification of critical edge applications at the edge of mobile edge platform, instead of installing centralized SDN controller. 
After the identification of critical edge services, the identified services are forwarded to the controller and the corresponding rules to be offloaded for those services in the edge servers of edge platform.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125643595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
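One plausible reading of the "adaptive service identifier" is a component that watches per-service response times and flags a service as delay-critical when its recent tail latency approaches its (possibly varying) deadline, so that offloading rules can be installed for it. The window size, percentile, and margin below are assumptions, not DeServE's actual mechanism.

```python
from collections import defaultdict, deque

class ServiceIdentifier:
    """Flags delay-critical services from observed response times (illustrative sketch)."""

    def __init__(self, window=50, margin=0.8):
        self.samples = defaultdict(lambda: deque(maxlen=window))
        self.margin = margin

    def observe(self, service, response_time_ms):
        self.samples[service].append(response_time_ms)

    def is_critical(self, service, current_deadline_ms):
        """A service is critical when its recent 95th-percentile response time
        approaches the deadline currently demanded by the application."""
        history = sorted(self.samples[service])
        if not history:
            return False
        p95 = history[int(0.95 * (len(history) - 1))]
        return p95 >= self.margin * current_deadline_ms

# Critical services would then be reported to the controller, which installs
# offloading rules for them on the edge servers.
ident = ServiceIdentifier()
for t in [40, 42, 55, 61, 70]:
    ident.observe("ar-overlay", t)
print(ident.is_critical("ar-overlay", current_deadline_ms=80))
```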
Fast transparent virtual machine migration in distributed edge clouds
Lucas Chaufournier, Prateek Sharma, Franck Le, E. Nahum, P. Shenoy, D. Towsley
Edge clouds are emerging as a popular paradigm of computation. In edge clouds, computation and storage can be distributed across a large number of locations, allowing applications to be hosted at the edge of the network close to end-users. Virtual machine live migration is a key mechanism that enables applications to be nimble and nomadic as they respond to changing user locations and workload. However, VM live migration in edge clouds poses a number of challenges. Migrating VMs between geographically separate locations over slow wide-area network links results in long migration times and high unavailability of the application, due to network reconfiguration delays as user traffic is redirected to the newly migrated location. In this paper, we propose the use of multi-path TCP to improve both VM migration time and the network transparency of applications. We evaluate our approach in a commercial public cloud environment and an emulated lab-based edge cloud testbed under a variety of network conditions, and show that our approach can reduce migration times by up to 2x while virtually eliminating downtime for most applications.
{"title":"Fast transparent virtual machine migration in distributed edge clouds","authors":"Lucas Chaufournier, Prateek Sharma, Franck Le, E. Nahum, P. Shenoy, D. Towsley","doi":"10.1145/3132211.3134445","DOIUrl":"https://doi.org/10.1145/3132211.3134445","url":null,"abstract":"Edge clouds are emerging as a popular paradigm of computation. In edge clouds, computation and storage can be distributed across a large number of locations, allowing applications to be hosted at the edge of the network close to the end-users. Virtual machine live migration is a key mechanism which enables applications to be nimble and nomadic as they respond to changing user locations and workload. However, VM live migration in edge clouds poses a number of challenges. Migrating VMs between geographically separate locations over slow wide-area network links results in large migration times and high unavailability of the application. This is due to network reconfiguration delays as user traffic is redirected to the newly migrated location. In this paper, we propose the use of multi-path TCP to both improve VM migration time and network transparency of applications. We evaluate our approach in a commercial public cloud environment and an emulated lab based edge cloud testbed using a variety of network conditions and show that our approach can reduce migration times by up to 2X while virtually eliminating downtimes for most applications.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121367926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ParkMaster: an in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments
Giulio Grassi, K. Jamieson, P. Bahl, G. Pau
We present the design and implementation of ParkMaster, a system that leverages the ubiquitous smartphone to help drivers find parking spaces in the urban environment. ParkMaster estimates parking space availability using video gleaned from drivers' dash-mounted smartphones at the network's edge, uploading analytics about the street to the cloud in real time as participants drive. Novel lightweight parked-car localization algorithms enable the system to estimate each parked car's approximate location by fusing information from the phone's camera, GPS, and inertial sensors, tracking and counting parked cars as they move through the driving car's camera field of view. To visually calibrate the system, ParkMaster relies only on the size of well-known objects in the urban environment for on-the-go calibration. We implement and deploy ParkMaster on Android smartphones, uploading parking analytics to the Azure cloud. On-the-road experiments in three different environments (Los Angeles, Paris, and an Italian village) measure the end-to-end accuracy of the system's parking estimates (close to 90%) as well as the amount of cellular data the system requires (less than one megabyte per hour). Drill-down microbenchmarks then analyze the factors contributing to this end-to-end performance, such as video resolution, vision algorithm parameters, and CPU resources.
{"title":"Parkmaster: an in-vehicle, edge-based video analytics service for detecting open parking spaces in urban environments","authors":"Giulio Grassi, K. Jamieson, P. Bahl, G. Pau","doi":"10.1145/3132211.3134452","DOIUrl":"https://doi.org/10.1145/3132211.3134452","url":null,"abstract":"We present the design and implementation of ParkMaster, a system that leverages the ubiquitous smartphone to help drivers find parking spaces in the urban environment. ParkMaster estimates parking space availability using video gleaned from drivers' dash-mounted smartphones on the network's edge, uploading analytics about the street to the cloud in real time as participants drive. Novel lightweight parked-car localization algorithms enable the system to estimate each parked car's approximate location by fusing information from phone's camera, GPS, and inertial sensors, tracking and counting parked cars as they move through the driving car's camera frame of view. To visually calibrate the system, ParkMaster relies only on the size of well-known objects in the urban environment for on-the-go calibration. We implement and deploy ParkMaster on Android smartphones, uploading parking analytics to the Azure cloud. On-the-road experiments in three different environments comprising Los Angeles, Paris and an Italian village measure the end-to-end accuracy of the system's parking estimates (close to 90%) as well as the amount of cellular data usage the system requires (less than one mega-byte per hour). Drill-down microbenchmarks then analyze the factors contributing to this end-to-end performance, as video resolution, vision algorithm parameters, and CPU resources.","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115502937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time traffic estimation at vehicular edge nodes
Gorkem Kar, Shubham Jain, M. Gruteser, F. Bai, R. Govindan
Traffic estimation has been a long-studied problem, but prior work has mostly provided coarse estimates over large areas. This work proposes effective fine-grained traffic volume estimation using in-vehicle, dashboard-mounted cameras. Existing work on traffic estimation relies on static traffic cameras that are usually deployed at crowded intersections and at some traffic lights. For streets with no traffic cameras, well-known navigation apps (e.g., Google Maps, Waze) are often used to get traffic information, but these applications depend on a limited number of GPS traces to estimate speed and therefore may not reflect the average speed experienced by every vehicle. Moreover, they do not give any information about the number of vehicles traveling on the road. In this work, we harvest vehicles as edge compute nodes, focusing on the sensing and interpretation of traffic from live video streams. With this goal, we consider a system that uses dash-cam video collected on a drive and executes object detection and identification techniques on this data to detect and count vehicles. We use image processing techniques to estimate the lane of travel and the speed of vehicles in real time. To evaluate this system, we recorded several trips on a major highway and a university road. The results show that vehicle counting accuracy depends heavily on traffic conditions, but even during peak hours we achieve more than 90% counting accuracy for vehicles traveling in the leftmost lane. For the detected vehicles, our speed estimation gives less than 10% error across diverse roads and traffic conditions, and over 91% lane estimation accuracy for vehicles traveling in the leftmost lane (i.e., the passing lane).
{"title":"Real-time traffic estimation at vehicular edge nodes","authors":"Gorkem Kar, Shubham Jain, M. Gruteser, F. Bai, R. Govindan","doi":"10.1145/3132211.3134461","DOIUrl":"https://doi.org/10.1145/3132211.3134461","url":null,"abstract":"Traffic estimation has been a long-studied problem, but prior work has mostly provided coarse estimates over large areas. This work proposes effective fine-grained traffic volume estimation using in-vehicle dashboard mounted cameras. Existing work on traffic estimation relies on static traffic cameras that are usually deployed at crowded intersections and at some traffic lights. For streets with no traffic cameras, some well-known navigation apps (e.g., Google Maps, Waze) are often used to get the traffic information but these applications depend on limited number of GPS traces to estimate speed, and therefore may not show the average speed experienced by every vehicle. Moreover, they do not give any information about the number of vehicles traveling on the road. In this work, we focus on harvesting vehicles as edge compute nodes, focusing on sensing and interpretation of traffic from live video streams. With this goal, we consider a system that uses the dash-cam video collected on a drive, and executes object detection and identification techniques on this data to detect and count vehicles. We use image processing techniques to estimate the lane of traveling and speed of vehicles in real-time. To evaluate this system, we recorded several trips on a major highway and a university road. The results show that vehicle count accuracy depends on traffic conditions heavily but even during the peak hours, we achieve more than 90% counting accuracy for the vehicles traveling in the left most lane. For the detected vehicles, results show that our speed estimation gives less than 10% error across diverse roads and traffic conditions, and over 91% lane estimation accuracy for vehicles traveling in the left most lane (i.e., the passing lane).","PeriodicalId":389022,"journal":{"name":"Proceedings of the Second ACM/IEEE Symposium on Edge Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128925042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}