"Collaborative Cloud-Edge-Local Computation Offloading for Multi-Component Applications"
Anousheh Gholami, J. Baras. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 361-365. DOI: https://doi.org/10.1145/3453142.3493515
With the explosion of intelligent and latency-sensitive applications such as AR/VR, remote health, and autonomous driving, mobile edge computing (MEC) has emerged as a promising solution to mitigate the high end-to-end latency of mobile cloud computing (MCC). However, edge servers have significantly less computing capability than the resourceful central cloud. Therefore, a collaborative cloud-edge-local offloading scheme is necessary to accommodate both computationally intensive and latency-sensitive mobile applications. The coexistence of the central cloud, edge servers, and the mobile device (MD), forming a multi-tiered heterogeneous architecture, makes optimal application deployment very challenging, especially for multi-component applications with component dependencies. This paper addresses the problem of energy- and latency-efficient application offloading in a collaborative cloud-edge-local environment. We formulate a multi-objective mixed integer linear program (MILP) with the goal of minimizing system-wide energy consumption and application end-to-end latency. An approximation algorithm based on LP relaxation and rounding is proposed to address the time complexity. We demonstrate that our approach outperforms existing strategies in terms of application request acceptance ratio, latency, and system energy consumption.
CCS Concepts: • Networks → Network resources allocation; Cloud computing.
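To make the LP-relaxation-and-rounding idea concrete, here is a minimal sketch of a rounding step: after the relaxed LP produces fractional placement values for each application component across the local/edge/cloud tiers, the fractional solution is rounded to an integral placement that respects per-tier capacity. All names, the greedy order, and the toy capacity model are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical rounding step for a relaxed component-placement LP.
def round_placement(frac, capacity, demand):
    """frac[c][t]: fractional LP value for placing component c on tier t.
    Greedily rounds the "most decided" components first, assigning each to
    its highest-valued tier that still has capacity."""
    placement = {}
    used = {t: 0.0 for t in capacity}
    order = sorted(frac, key=lambda c: -max(frac[c].values()))
    for c in order:
        for t in sorted(frac[c], key=frac[c].get, reverse=True):
            if used[t] + demand[c] <= capacity[t]:
                placement[c] = t
                used[t] += demand[c]
                break
        else:
            return None  # no feasible tier: the application request is rejected
    return placement

# Toy three-component application (names are invented for illustration).
frac = {"ui":       {"local": 0.9, "edge": 0.1, "cloud": 0.0},
        "detector": {"local": 0.1, "edge": 0.7, "cloud": 0.2},
        "trainer":  {"local": 0.0, "edge": 0.3, "cloud": 0.7}}
capacity = {"local": 1.0, "edge": 2.0, "cloud": 10.0}
demand = {"ui": 0.5, "detector": 1.5, "trainer": 4.0}
print(round_placement(frac, capacity, demand))
```

Rounding the fractional solution this way keeps the placement feasible while staying close to the LP optimum; the paper's approximation guarantees would come from a more careful rounding than this greedy sketch.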
"Will They or Won't They?: Toward Effective Prediction of Watch Behavior for Time-Shifted Edge-Caching of Netflix Series Videos"
Shruti Lall, Raghupathy Sivakumar. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 257-270. DOI: https://doi.org/10.1145/3453142.3493504
Internet traffic load is not uniformly distributed through the day; it is significantly higher during peak periods and comparatively idle during off-peak periods. In this context, we present CacheFlix, a time-shifted edge-caching solution that prefetches Netflix content during off-peak periods of network connectivity. We specifically focus on Netflix since it contributes the largest percentage of global Internet traffic by a single application. We analyze a real-world dataset of Netflix viewing activity that we collected from 1060 users spanning a 1-year period and comprising over 2.2 million Netflix TV shows and documentary series; we restrict the scope of our study to Netflix series, which account for 65% of a typical user's Netflix load in terms of bytes fetched. We present insights on users' viewing behavior, and develop an accurate and efficient prediction algorithm using LSTM networks that caches episodes of Netflix series on storage-constrained edge nodes, based on the user's past viewing activity. We evaluate CacheFlix on the collected dataset over various cache eviction policies, and find that CacheFlix is able to shift 70% of Netflix series traffic to off-peak hours.
"A Decomposed Deep Training Solution for Fog Computing Platforms"
Jia Qian, M. Barzegaran. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 423-431. DOI: https://doi.org/10.1145/3453142.3493509
Legacy machine learning solutions collect user data from data sources and place computation tasks in the Cloud. Such solutions consume communication capacity and compromise privacy through possible leakage of sensitive user data. These concerns are resolved by Fog computing, which integrates computation and communication in Fog nodes at the edge of the network, pushing intelligence closer to the machines and devices. However, pushing computational tasks to the edge of the network requires high-end Fog nodes with powerful computation resources. This paper proposes a method in which computation tasks are decomposed and distributed among all the available resources. The more resource-demanding computation is placed in the Cloud, and the remainder is mapped to the Fog nodes using migration mechanisms in Fog computing platforms. The presented method makes use of all available resources in a Fog computing platform while protecting user privacy. Furthermore, the proposed method optimizes the network traffic such that highly critical applications running on the Fog nodes are not negatively impacted. We have implemented (deep) neural networks using our proposed method and evaluated it with MNIST and CIFAR100 as the data sources for the test cases. The results show the advantages of our proposed method compared to other approaches, i.e., Cloud computing and Federated Learning, with better data protection and resource utilization.
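A toy sketch of the decomposition idea (the function names and the cost model are assumptions): keep an input-side prefix of the network's layers on the Fog node within its compute budget, so raw user data never leaves it, and map the more resource-demanding remainder to the Cloud.

```python
def split_layers(layer_costs, fog_budget):
    """Assign consecutive layers (input side first) to the Fog node until its
    compute budget is exhausted; the remainder goes to the Cloud. Keeping the
    input layers on the Fog node means raw data never leaves the edge."""
    fog, cloud, spent = [], [], 0.0
    for name, cost in layer_costs:
        if not cloud and spent + cost <= fog_budget:
            fog.append(name)
            spent += cost
        else:
            cloud.append(name)
    return fog, cloud

# Invented per-layer compute costs (arbitrary units) for a small CNN.
layers = [("conv1", 2.0), ("conv2", 3.0), ("fc1", 8.0), ("fc2", 1.0)]
fog, cloud = split_layers(layers, fog_budget=6.0)
print(fog, cloud)  # ['conv1', 'conv2'] ['fc1', 'fc2']
```

Only intermediate activations cross the Fog-Cloud boundary in this split, which is what gives the privacy benefit relative to shipping raw data to the Cloud.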
"MIRAGE: Machine Learning-based Modeling of Identical Replicas of the Jetson AGX Embedded Platform"
Hassan Halawa, Hazem A. Abdelhafez, M. O. Ahmed, K. Pattabiraman, M. Ripeanu. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 26-40. DOI: https://doi.org/10.1145/3453142.3491284
A common feature of devices deployed at the edge today is their configurability. The NVIDIA Jetson AGX, for example, has a user-configurable frequency range spanning more than one order of magnitude for the CPU, the GPU, and the memory controller. Key to making effective use of this configurability is the ability to anticipate the application-level impact of a frequency configuration choice. To this end, this paper presents a novel modeling approach for predicting the runtime and power consumption of convolutional neural networks (CNNs). This modeling approach is: (i) effective - i.e., makes predictions with low error (models achieve an average relative error of 15.4% for runtime and 14.9% for energy); (ii) efficient - i.e., has a low cost to make predictions; (iii) generic - i.e., supports deploying updated and possibly different deep learning inference models without the need for retraining; and (iv) practical - i.e., requires a low training cost. Three features, all geared towards meeting the challenges of deploying in a real-world environment, set this work apart: (i) the focus on predicting the impact of the frequency configuration choice; (ii) the methodological choice to aggregate predictions at fine (i.e., kernel-level) granularity, which provides generality; and (iii) taking into account the inter-node variability among nominally identical devices.
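A hypothetical illustration of the kernel-level aggregation idea: predict runtime per GPU kernel at a given frequency configuration, then sum over the kernels a CNN invokes. The per-kernel model below is a crude stand-in (runtime scaling inversely with clock frequency), not MIRAGE's learned models, and the kernel names and timings are invented.

```python
def predict_kernel_runtime(base_ms, base_freq_mhz, freq_mhz):
    # Stand-in per-kernel model: runtime scales inversely with frequency.
    return base_ms * base_freq_mhz / freq_mhz

def predict_network_runtime(kernel_profile, freq_mhz, base_freq_mhz=1377.0):
    """Aggregate per-kernel predictions into an end-to-end runtime estimate.
    Predicting at kernel granularity is what makes the approach generic: a
    new CNN only needs its kernel list, not a retrained end-to-end model."""
    return sum(predict_kernel_runtime(ms, base_freq_mhz, freq_mhz)
               for ms in kernel_profile.values())

# Invented kernel timings (ms) measured at the base GPU frequency.
cnn_kernels = {"im2col": 1.2, "gemm": 5.6, "relu": 0.3}
print(round(predict_network_runtime(cnn_kernels, freq_mhz=688.5), 2))  # 14.2
```

Halving the GPU clock in this toy model doubles the predicted runtime; the paper's contribution is learning per-kernel models that capture the real, non-linear frequency response and the variability across nominally identical boards.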
"Towards Open and Cross Domain Edge Emulation – The AdvantEDGE Platform"
Robert Gazda, Michel Roy, J. Blakley, Aly Sakr, Rolf Schuster. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 339-344. DOI: https://doi.org/10.1145/3453142.3493518
Edge computing brings resources nearer to end users and devices. Edge resources are heterogeneous and dynamic, presenting unique and competing challenges to researchers, network designers, and application developers. To meet these challenges, there is a critical ecosystem need for edge emulation capabilities. Several edge emulators exist; however, most do not fully satisfy the needs of the edge's various stakeholders. We present AdvantEDGE, an open mobile edge emulator that is feature-rich while remaining flexible. AdvantEDGE enables diverse stakeholders to explore their respective disciplines while interacting with each other. In this paper, we summarize existing edge emulators, identify missing requirements and show how AdvantEDGE fulfills them, and present research examples that were enabled by the use of AdvantEDGE.
"Learning Based Edge Computing in Air-to-Air Communication Network"
Zhe Wang, Hongxiang Li, E. Knoblock, R. Apaza. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 333-338. DOI: https://doi.org/10.1145/3453142.3491417
This paper studies learning-based edge computing and communication in a dynamic Air-to-Air Ad-hoc Network (AAAN). Due to spectrum scarcity, we assume the number of Air-to-Air (A2A) communication links is greater than the number of available frequency channels, such that some communication links have to share the same channel, causing co-channel interference. We formulate the joint channel selection and power control optimization problem to maximize the aggregate spectrum utilization efficiency under resource and fairness constraints. A distributed deep Q-learning-based edge computing and communication algorithm is proposed to find the optimal solution. In particular, we design two different neural network structures, and each communication link can converge to the optimal operation by exploiting only the local information from its neighbors, making the approach scalable to large networks. Finally, experimental results demonstrate the effectiveness of the proposed solution in various AAAN scenarios.
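To show the shape of Q-learning-based channel selection, here is a minimal tabular sketch. The paper uses deep Q-networks over local neighbor information; this toy replaces the neural network with a single-state Q-table and a synthetic per-channel reward, so every numeric value here is an invented stand-in.

```python
import random

random.seed(0)
N_CHANNELS = 3
TRUE_QUALITY = [0.2, 0.9, 0.5]  # hidden per-channel spectrum efficiency (invented)

q = [0.0] * N_CHANNELS          # Q-value per action (channel choice)
alpha, epsilon = 0.1, 0.1       # learning rate, exploration probability
for step in range(2000):
    if random.random() < epsilon:                    # explore
        a = random.randrange(N_CHANNELS)
    else:                                            # exploit current estimate
        a = max(range(N_CHANNELS), key=q.__getitem__)
    reward = TRUE_QUALITY[a] + random.gauss(0, 0.05)  # noisy observed efficiency
    q[a] += alpha * (reward - q[a])                   # single-state Q update

best = max(range(N_CHANNELS), key=q.__getitem__)
print(best)  # converges to channel 1, the highest-quality channel
```

The distributed version in the paper additionally conditions each link's action on its neighbors' states, which is what resolves co-channel interference rather than just per-link quality.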
"ePulsar: Control Plane for Publish-Subscribe Systems on Geo-Distributed Edge Infrastructure"
Harshit Gupta, Tyler C. Landle, U. Ramachandran. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 228-241. DOI: https://doi.org/10.1145/3453142.3491271
Emerging applications such as autonomous drones and massively multiplayer gaming require real-time communication between multiple geo-distributed participating entities. A publish-subscribe system deployed on a geo-distributed edge infrastructure would provide a scalable messaging middleware for such applications. However, state-of-the-art publish-subscribe systems like Apache Pulsar and Kafka perform inefficiently in a geo-distributed deployment due to heterogeneous client-broker latencies and constant client mobility. We present a novel control-plane architecture for geo-distributed publish-subscribe systems that is capable of adaptive topic partitioning to enable low-latency messaging for such applications. We leverage a peer-to-peer network coordinate protocol for scalable estimation of network latencies between publish-subscribe brokers and clients. Client-broker latency and workload metrics are continuously collected from brokers and used to detect latency violations or workload imbalance, which triggers reassignment of topics. We develop ePulsar, which incorporates the control-plane architecture ideas into the popular Apache Pulsar publish-subscribe system while retaining Pulsar's data-plane APIs. We evaluate the efficacy and overheads of the proposed control plane using workload scenarios representative of typical edge-centric applications on an emulated geo-distributed infrastructure.
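An illustrative sketch of the violation-triggered reassignment loop (function and broker names are assumptions, not ePulsar's API): scan the collected client-broker latency metrics, and for any topic whose current broker exceeds the latency SLA, propose moving it to the broker with the lowest estimated latency, as the network-coordinate system would supply.

```python
def plan_reassignments(topics, latency_ms, sla_ms):
    """topics: topic -> current broker.
    latency_ms: (topic, broker) -> estimated client-broker latency,
    e.g. derived from peer-to-peer network coordinates."""
    moves = {}
    for topic, broker in topics.items():
        if latency_ms[(topic, broker)] > sla_ms:  # SLA violation detected
            best = min((b for t, b in latency_ms if t == topic),
                       key=lambda b: latency_ms[(topic, b)])
            if best != broker:
                moves[topic] = best               # trigger topic reassignment
    return moves

topics = {"drone-telemetry": "broker-east"}
latency_ms = {("drone-telemetry", "broker-east"): 42.0,
              ("drone-telemetry", "broker-west"): 9.0}
print(plan_reassignments(topics, latency_ms, sla_ms=20.0))
```

A real control plane would also weigh broker load before moving a topic, since reassigning purely on latency can create the workload imbalance the metrics are meant to catch.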
"Identification of security threats, safety hazards, and interdependencies in industrial edge computing"
P. Denzler, Siegfried Hollerer, Thomas Frühwirth, W. Kastner. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 397-402. DOI: https://doi.org/10.1145/3453142.3493508
Edge computing provides the means for higher integration and seamless communication in industrial automation. Due to the paradigm's distributed nature, it faces several security threats and safety hazards. This paper presents an adjusted method of combining STRIDE-LM and HAZOP to identify security threats, safety hazards, and interdependencies suitable for edge computing. The method allows a bi-directional identification of m:n interdependencies between threats and hazards. The paper concludes by outlining further research, including identifying possible failure chains, later risk analysis, evaluation, and treatment.
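As a small data-structure sketch of the bi-directional m:n interdependency mapping such a combined STRIDE-LM/HAZOP analysis produces (the example threat and hazard entries are invented for illustration):

```python
from collections import defaultdict

class InterdependencyMap:
    """Bi-directional m:n mapping between security threats and safety hazards."""
    def __init__(self):
        self.threat_to_hazards = defaultdict(set)
        self.hazard_to_threats = defaultdict(set)

    def link(self, threat, hazard):
        # One threat may relate to many hazards and vice versa (m:n),
        # and the mapping can be queried from either direction.
        self.threat_to_hazards[threat].add(hazard)
        self.hazard_to_threats[hazard].add(threat)

m = InterdependencyMap()
m.link("Spoofing: forged sensor ID", "HAZOP: wrong actuator setpoint")
m.link("Tampering: modified fieldbus frame", "HAZOP: wrong actuator setpoint")
print(sorted(m.hazard_to_threats["HAZOP: wrong actuator setpoint"]))
```

Querying in both directions is what lets an analyst start from either a STRIDE-LM threat or a HAZOP deviation and enumerate the related entries on the other side.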
"Spectrum-Aware Mobile Edge Computing for UAVs Using Reinforcement Learning"
Babak Badnava, Taejoon Kim, Kenny Cheung, Zaheer Ali, M. Hashemi. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 376-380. DOI: https://doi.org/10.1145/3453142.3491414
We consider the problem of task offloading by unmanned aerial vehicles (UAVs) using mobile edge computing (MEC). In this context, each UAV decides whether to offload its computation task to a more powerful MEC server (e.g., a base station) or to perform the task locally. In this paper, we propose a spectrum-aware decision-making framework such that each agent can dynamically select one of the available channels for offloading. To this end, we develop a deep reinforcement learning (DRL) framework for the UAVs to select the channel for task offloading or perform the computation locally. In the numerical results based on a deep Q-network (DQN), we consider a combination of energy consumption and task completion time as the reward. Simulation results based on low-band, mid-band, and high-band channels demonstrate that the DQN agents efficiently learn the environment and dynamically adjust their actions to maximize the long-term reward.
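A toy version of the reward signal described above (the weighting and the action encoding are assumptions): each UAV either computes locally or offloads over a chosen band, and the reward penalizes a weighted sum of energy consumption and task completion time, so maximizing reward minimizes both.

```python
def reward(energy_j, completion_s, w_energy=0.5, w_time=0.5):
    # Negative weighted cost: a DQN maximizing this reward jointly
    # minimizes energy consumption and task completion time.
    return -(w_energy * energy_j + w_time * completion_s)

# Invented candidate actions: local execution vs. offloading on each band.
actions = {
    "local":        reward(energy_j=8.0, completion_s=4.0),
    "offload-low":  reward(energy_j=3.0, completion_s=2.5),
    "offload-mid":  reward(energy_j=2.5, completion_s=1.5),
    "offload-high": reward(energy_j=2.0, completion_s=1.0),
}
print(max(actions, key=actions.get))  # offload-high has the best reward
```

In the actual framework the per-action energy and time are not fixed numbers but depend on channel conditions and server load, which is why the agents learn the mapping with a DQN instead of computing it in closed form.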
"CANGuard: Practical Intrusion Detection for In-Vehicle Network via Unsupervised Learning"
Wu Zhou, Hao-ming Fu, Shray Kapoor. 2021 IEEE/ACM Symposium on Edge Computing (SEC), pp. 454-458. DOI: https://doi.org/10.1145/3453142.3493514
Modern vehicles are becoming more advanced by incorporating new functionalities such as V2X, greater connectivity, and autonomous driving. However, these new capabilities also expose the vehicle more widely to the outside and thus pose more severe threats to vehicle security and safety. In this paper, we propose CANGuard, a vehicle intrusion detection system that learns in-vehicle traffic patterns and uses them to detect anomalies in a vehicle network. CANGuard applies an autoencoder, an unsupervised learning technique, to raw CAN messages to learn efficient models of these data, and requires no expert labeling of CAN messages, as supervised approaches do. Unlike a prior study that also uses unsupervised learning but can only detect attacks involving a single type of message, CANGuard can detect attacks involving multiple types of messages as well. Experiments with public datasets demonstrate that CANGuard achieves results comparable to, and in some cases better than, state-of-the-art supervised approaches. Combined with its unsupervised nature and its capability to detect attacks involving multiple message types, this shows that CANGuard is more practical to deploy in modern vehicle environments.
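A sketch of the reconstruction-error detection rule underlying autoencoder-based intrusion detection. The "model" below reconstructs every message as the component-wise mean of normal traffic, a deliberately crude stand-in for CANGuard's trained autoencoder; the idea it illustrates is the same: fit on normal traffic, set a threshold from normal reconstruction error, and flag messages exceeding it.

```python
def recon_error(msg, mean):
    # Worst-case deviation of any signal from its reconstruction.
    return max(abs(a - b) for a, b in zip(msg, mean))

def train(normal_msgs, margin=1.2):
    """Fit the stand-in "autoencoder" (component-wise mean of normal traffic)
    and derive an anomaly threshold from the errors it makes on normal data."""
    n = len(normal_msgs)
    mean = [sum(col) / n for col in zip(*normal_msgs)]
    threshold = margin * max(recon_error(m, mean) for m in normal_msgs)
    return mean, threshold

def is_anomalous(msg, mean, threshold):
    return recon_error(msg, mean) > threshold

# Invented decoded CAN signal values (e.g. normalized speed, throttle).
normal = [[0.50, 0.10], [0.52, 0.12], [0.48, 0.08]]
mean, thr = train(normal)
print(is_anomalous([0.51, 0.11], mean, thr))  # False: looks like normal traffic
print(is_anomalous([0.95, 0.90], mean, thr))  # True: injected message
```

A real autoencoder replaces the mean with a learned nonlinear encode/decode pair, which is what lets it capture correlations across multiple message types rather than per-signal ranges.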