Towards Context-aware Distributed Learning for CNN in Mobile Applications
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00045
Zhuwei Qin, Hao Jiang
Intelligent mobile applications are ubiquitous on mobile devices. These applications keep collecting new and sensitive data from different users, while being expected to continually adapt the embedded machine learning model to the newly collected data. To improve the quality of service while protecting users’ privacy, distributed mobile learning (e.g., Federated Learning (FedAvg) [1]) has been proposed to offload model training from the cloud to the mobile devices, enabling multiple devices to collaboratively train a shared model without leaking the data to the cloud. However, this design becomes impracticable when training the machine learning model (e.g., a Convolutional Neural Network (CNN)) on mobile devices with diverse application contexts. In conventional distributed training schemes, different devices are assumed to have integrated training datasets and to train identical CNN model structures; distributed collaboration between devices is implemented by a straightforward weight average of the identical local models. In mobile image classification tasks, however, different mobile applications have dedicated classification targets depending on individual users’ preferences and application specificity. Therefore, directly averaging the weights of each local model results in a significant reduction of test accuracy. To solve this problem, we propose CAD: a context-aware distributed learning framework for mobile applications, where each mobile device is deployed with a context-adaptive submodel structure instead of the entire global model structure.
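For reference, below is a minimal sketch of the conventional FedAvg-style weight averaging that the abstract contrasts CAD against: identical local model structures are averaged, weighted by local dataset size. The layer names, tensor shapes, and client counts are illustrative only, not taken from the paper.

```python
# Conventional FedAvg weight averaging over identical local model structures.
# Layer names, shapes, and sample counts below are hypothetical.
import numpy as np

def fedavg(local_weights, num_samples):
    """Average identical local models, weighted by local dataset size."""
    total = float(sum(num_samples))
    averaged = {}
    for layer in local_weights[0]:
        averaged[layer] = sum(
            w[layer] * (n / total) for w, n in zip(local_weights, num_samples)
        )
    return averaged

# Two devices hold the same structure but (in CAD's setting) serve different
# classification targets; naively averaging them is what degrades accuracy.
clients = [
    {"conv1": np.random.randn(3, 3, 1, 8), "fc": np.random.randn(128, 10)},
    {"conv1": np.random.randn(3, 3, 1, 8), "fc": np.random.randn(128, 10)},
]
global_weights = fedavg(clients, num_samples=[600, 400])
print({k: v.shape for k, v in global_weights.items()})
```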
{"title":"Towards Context-aware Distributed Learning for CNN in Mobile Applications","authors":"Zhuwei Qin, Hao Jiang","doi":"10.1109/SEC50012.2020.00045","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00045","url":null,"abstract":"Intelligent mobile applications have been ubiquitous on mobile devices. These applications keep collecting new and sensitive data from different users while being expected to have the ability to continually adapt the embedded machine learning model to these newly collected data. To improve the quality of service while protecting users’ privacy, distributed mobile learning (e.g., Federated Learning (FedAvg) [1]) has been proposed to offload model training from the cloud to the mobile devices, which enables multiple devices collaboratively train a shared model without leaking the data to the cloud. However, this design becomes impracticable when training the machine learning model (e.g., Convolutional Neural Network (CNN)) on mobile devices with diverse application context. For example, in conventional distributed training schemes, different devices are assumed to have integrated training datasets and train identical CNN model structures. Distributed collaboration between devices is implemented by a straightforward weight average of each identical local models. While, in mobile image classification tasks, different mobile applications have dedicated classification targets depending on individual users’ preference and application specificity. Therefore, directly averaging the model weight of each local model will result in a significant reduction of the test accuracy. To solve this problem, we proposed CAD: a context-aware distributed learning framework for mobile applications, where each mobile device is deployed with a context-adaptive submodel structure instead of the entire global model structure.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127692853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Online Learning and Concept Drift for Offloading Complex Event Processing in the Edge
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00024
João Alexandre Neto, Jorge C. B. Fonseca, Kiev Gama
Edge computing has enabled the use of Complex Event Processing (CEP) closer to data sources, delivering on-time responses to critical applications. One of the challenges in this context is how to support this processing while keeping resource usage (e.g., memory, CPU) optimal. State-of-the-art solutions have suggested computational offloading techniques to distribute processing across the nodes and reach such optimization. Most of them make the offloading decision through predefined policies or through adaptive solutions based on machine learning algorithms. However, these techniques are not able to learn incrementally without historical data or to adapt to changes in the statistical properties of the data. This research aims to use online learning and concept drift detection in the offloading decision to optimize resource usage and keep the learning model up to date. The feasibility of our approach was demonstrated through preliminary evaluations.
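To make the combination of incremental learning and drift detection concrete, here is an illustrative sketch (not the authors' implementation): an offloading decision model trained online with SGD, whose recent error rate is compared against its long-run error rate as a simple drift check. The feature layout, thresholds, and class meanings are assumptions.

```python
# Online offloading decision with a simple error-rate drift check (illustrative).
import numpy as np

class OnlineOffloader:
    def __init__(self, n_features, lr=0.05, window=50, drift_factor=2.0):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.window = window            # size of the recent-error window
        self.drift_factor = drift_factor
        self.recent_errors = []
        self.total_errors = 0
        self.total_seen = 0

    def predict(self, x):
        # 1 = offload to another node, 0 = process locally (hypothetical labels)
        return int(1.0 / (1.0 + np.exp(-(self.w @ x + self.b))) >= 0.5)

    def learn(self, x, y):
        err = int(self.predict(x) != y)
        self.total_errors += err
        self.total_seen += 1
        self.recent_errors.append(err)
        if len(self.recent_errors) > self.window:
            self.recent_errors.pop(0)
        # One SGD step on the logistic loss (incremental learning, no history kept).
        p = 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))
        self.w -= self.lr * (p - y) * x
        self.b -= self.lr * (p - y)
        return self._drift_detected()

    def _drift_detected(self):
        # Flag drift when the recent error rate exceeds the long-run rate by a factor.
        if self.total_seen < self.window:
            return False
        recent = float(np.mean(self.recent_errors))
        overall = self.total_errors / self.total_seen
        return recent > self.drift_factor * max(overall, 1e-3)

model = OnlineOffloader(n_features=3)
drift = model.learn(np.array([0.2, 0.5, 0.1]), y=1)
```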
{"title":"Towards Online Learning and Concept Drift for Offloading Complex Event Processing in the Edge","authors":"João Alexandre Neto, Jorge C. B. Fonseca, Kiev Gama","doi":"10.1109/SEC50012.2020.00024","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00024","url":null,"abstract":"Edge computing has enabled the usage of Complex Event Processing (CEP) closer to data sources, delivering on time response to critical applications. One of the challenges in this context is how to support this processing and keep an optimal resource usage (e.g., Memory, CPU). State-of-art solutions have suggested computational offloading techniques to distribute processing across the nodes and reach such optimization. Most of them take the offloading decision through predefined policies or adaptive solutions with the usage of machine learning algorithms. However, these techniques are not able to incrementally learn without any historical data or to adapt to changes on statistical data properties. This research aims to use online learning and concept drift detection on offloading decision to optimize resource usage and keep the learning model up-to-date. The feasibility of our approach was noticed through preliminary evaluations.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"36 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132866924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: An Assessment Framework for Edge Applications
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00030
Martin Wagner, Julien Gedeon, Karolis Skaisgiris, Florian Brandherm, M. Mühlhäuser
We introduce an assessment framework for edge computing applications. The framework allows developers to measure the execution time of their applications in different environments and to generate a model for predicting execution times. Based on these measurements and predictions, better-informed management decisions can be made for edge applications.
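A minimal sketch of the measure-then-predict idea, under assumptions of my own (the abstract does not specify the model or features): execution times are measured per environment and a least-squares linear model is fitted to predict the time for an unseen deployment.

```python
# Measure execution times, then fit a simple prediction model (illustrative).
import time
import numpy as np

def measure(fn, *args, repeats=5):
    """Return the mean wall-clock execution time of fn(*args)."""
    runs = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        runs.append(time.perf_counter() - start)
    return float(np.mean(runs))

print("measured sort time:", measure(sorted, list(range(100_000))))

# Hypothetical measurements: each row is (cpu_cores, input_size_mb) for one environment.
X = np.array([[1, 10], [2, 10], [4, 20], [8, 40]], dtype=float)
y = np.array([2.1, 1.2, 1.1, 0.9])  # measured execution times in seconds

# Fit t ≈ [features, 1] @ theta by least squares as the prediction model.
X_aug = np.hstack([X, np.ones((len(X), 1))])
theta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
print("predicted time for a 4-core, 30 MB deployment:",
      float(np.array([4, 30, 1]) @ theta))
```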
{"title":"Poster: An Assessment Framework for Edge Applications","authors":"Martin Wagner, Julien Gedeon, Karolis Skaisgiris, Florian Brandherm, M. Mühlhäuser","doi":"10.1109/SEC50012.2020.00030","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00030","url":null,"abstract":"We introduce an assessment framework for edge computing applications. The framework allows developers to measure the execution time of their applications in different environments and generate a model for the prediction of execution times. Based on these measurements and predictions, better informed management decisions can be made for edge applications.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131447307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proactive Microservice Placement and Migration for Mobile Edge Computing
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00010
Kaustabha Ray, A. Banerjee, N. Narendra
In recent times, Mobile Edge Computing (MEC) has emerged as a new paradigm allowing low-latency access to services deployed on edge nodes offering computation, storage and communication facilities. Vendors deploy their services on MEC servers to improve performance and mitigate the network latencies often encountered in accessing cloud services. A service placement policy determines which services are deployed on which MEC servers. A number of mechanisms exist in the literature to determine the optimal placement of services considering different performance metrics. However, for applications designed as microservice workflow architectures, service placement schemes need to be re-examined through a different lens owing to the inherent interdependencies between microservices. Indeed, the dynamic environment, with stochastic user movement and service invocations, along with a large placement configuration space, makes microservice placement in MEC a challenging task. Additionally, owing to user mobility, a placement scheme may need to be recalibrated, triggering service migrations to maintain the advantages offered by MEC. Existing microservice placement and migration schemes consider on-demand strategies. In this work, we take a different route and propose a Reinforcement Learning based proactive mechanism for microservice placement and migration. We use the San Francisco Taxi dataset to validate our approach. Experimental results show the effectiveness of our approach in comparison to other state-of-the-art methods.
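A heavily simplified, illustrative sketch of a reinforcement-learning placement agent follows: a tabular Q-learning loop that picks which MEC server should host a microservice given a discretized user-location state. The paper's actual state space, reward, and learning algorithm are richer; the reward function and sizes below are stand-ins.

```python
# Tabular Q-learning for microservice placement under user mobility (toy example).
import random

N_LOCATIONS = 5      # discretized user locations (hypothetical)
N_SERVERS = 3        # candidate MEC servers (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_SERVERS for _ in range(N_LOCATIONS)]

def reward(location, server):
    # Toy reward: negative "latency", smaller when the server is near the user.
    return -abs(location * N_SERVERS / N_LOCATIONS - server)

def choose_server(location):
    if random.random() < EPS:
        return random.randrange(N_SERVERS)                           # explore
    return max(range(N_SERVERS), key=lambda s: Q[location][s])       # exploit

location = random.randrange(N_LOCATIONS)
for _ in range(5000):
    server = choose_server(location)
    r = reward(location, server)
    next_location = random.randrange(N_LOCATIONS)                    # stochastic movement
    Q[location][server] += ALPHA * (r + GAMMA * max(Q[next_location])
                                    - Q[location][server])
    location = next_location

print("placement policy:",
      [max(range(N_SERVERS), key=lambda s: Q[l][s]) for l in range(N_LOCATIONS)])
```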
{"title":"Proactive Microservice Placement and Migration for Mobile Edge Computing","authors":"Kaustabha Ray, A. Banerjee, N. Narendra","doi":"10.1109/SEC50012.2020.00010","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00010","url":null,"abstract":"In recent times, Mobile Edge Computing (MEC) has emerged as a new paradigm allowing low-latency access to services deployed on edge nodes offering computation, storage and communication facilities. Vendors deploy their services on MEC servers to improve performance and mitigate network latencies often encountered in accessing cloud services. A service placement policy determines which services are deployed on which MEC servers. A number of mechanisms exist in literature to determine the optimal placement of services considering different performance metrics. However, for applications designed as microservice workflow architectures, service placement schemes need to be re-examined through a different lens owing to the inherent interdependencies which exist between microservices. Indeed, the dynamic environment, with stochastic user movement and service invocations, along with a large placement configuration space makes microservice placement in MEC a challenging task. Additionally, owing to user mobility, a placement scheme may need to be recalibrated, triggering service migrations to maintain the advantages offered by MEC. Existing microservice placement and migration schemes consider on-demand strategies. In this work, we take a different route and propose a Reinforcement Learning based proactive mechanism for microservice placement and migration. We use the San Francisco Taxi dataset to validate our approach. Experimental results show the effectiveness of our approach in comparison to other state-of-the-art methods.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129970244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BEAF: A Blockchain and Edge Assistant Framework with Data Sharing for IoT Networks
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00054
Chanying Huang, Yingxun Hu
Edge computing is emerging as an innovative technology that brings data processing and storage close to end users, bringing scale, decentralization and safety to IoT networks. It improves the quality of service but meanwhile introduces great challenges such as data security, latency, etc. Fortunately, blockchain technology can mitigate the security issues of edge computing in IoT networks, as it allows only trusted IoT nodes to interact with each other. To better address the security risks of the data sharing problem and improve the credibility of data, this paper presents BEAF, a Blockchain and Edge computing Assistant security Framework with data sharing for IoT networks. With blockchain and edge computing, BEAF supports both decentralization and data tracing after data sharing. In addition, BEAF can enforce access control and share data with specific nodes. We also conduct a security analysis and show that BEAF provides confidentiality, availability, data integrity, etc. Furthermore, we develop the base layer with Hyperledger Fabric and evaluate the performance of BEAF in terms of stability, scalability and efficiency.
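Purely as an illustration of the two properties the abstract highlights, access control and data tracing, the sketch below shows a shared record that carries an access list and a hash linking it to the previous record so that the sharing history can be audited. This is not BEAF's Hyperledger Fabric implementation; the record fields and node names are hypothetical.

```python
# Toy data-sharing record with an access-control list and hash chaining for tracing.
import hashlib
import json
import time

ledger = []  # append-only list standing in for the blockchain base layer

def share_data(owner, recipients, payload):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "owner": owner,
        "allowed": sorted(recipients),   # access-control list
        "payload": payload,
        "timestamp": time.time(),
        "prev_hash": prev_hash,          # links records for data tracing
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

def read_data(node, record):
    # Only the owner or nodes on the access list may read the payload.
    if node != record["owner"] and node not in record["allowed"]:
        raise PermissionError(f"{node} is not authorized for this record")
    return record["payload"]

rec = share_data("edge-node-1", ["iot-device-7"], {"temperature": 21.5})
print(read_data("iot-device-7", rec))
```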
{"title":"BEAF: A Blockchain and Edge Assistant Framework with Data Sharing for IoT Networks","authors":"Chanying Huang, Yingxun Hu","doi":"10.1109/SEC50012.2020.00054","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00054","url":null,"abstract":"Edge computing is emerging as an innovative technology that brings data processing and storage to end users, further leading to scale, decentralization and safety to IoT netWorks. It improves the quality of services but meanwhile introduces great challenges such as data security, latency, etc. Fortunately, blockchain technology can improve security issues of edge computing in IoT networks as it alloWs only trusted IoT nodes to interact With each other. To better address the security risks of data sharing problem and improve the credibility of data, this paper presents a Blockchain and Edge computing Assistant security FrameWork (BEAF) With data sharing for IoT netWorks. With blockchain and edge computing, BEAF supports both decentralization and data tracing after data sharing. In addition, BEAF can achieve access control and share data to specific node. We also conduct security analysis and shoiv that BEAF provides confidentiality, availability, data integrity, etc. In addition, We develop the base layer through Hyperledger Fabric technology, and evaluate the performance of BEAF in terms of stability, scalability and efficiency.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"9 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125064615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quantized Reservoir Computing on Edge Devices for Communication Applications
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00068
Shiya Liu, Lingjia Liu, Y. Yi
With the advance of edge computing, fast and efficient machine learning models running on edge devices are needed. In this paper, we propose a novel quantization approach that reduces the memory and compute demands on edge devices without losing much accuracy. We also explore its application in communication tasks such as symbol detection in 5G systems, attack detection in smart grids, and dynamic spectrum access. Conventional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can be exploited for these applications and achieve state-of-the-art performance. However, conventional neural networks consume a large amount of computation and storage resources, and thus do not fit well on edge devices. Reservoir computing (RC), a computation framework derived from RNNs, consists of a fixed reservoir layer and a trained readout layer. The advantages of RC compared to traditional RNNs are faster learning and lower training costs. Besides, RC has faster inference speed with fewer parameters and resistance to overfitting. These merits make the RC system more suitable for applications running on edge devices. We apply the proposed quantization approach to RC systems and demonstrate the quantized RC system on a Xilinx Zynq®-7000 FPGA board. On the sequential MNIST dataset, the quantized RC system uses 62%, 65%, and 64% fewer DSPs, FFs, and LUTs, respectively, compared to the floating-point RNN. The inference speed is improved by 17 times with an 8% accuracy drop.
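To make the precision-for-memory trade-off concrete, here is generic symmetric 8-bit weight quantization applied to a reservoir weight matrix. The paper proposes its own quantization approach, which may well differ from this textbook scheme; the reservoir size below is arbitrary.

```python
# Generic symmetric int8 quantization of a (fixed) reservoir weight matrix.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus a single scale factor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

reservoir_w = np.random.uniform(-0.5, 0.5, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(reservoir_w)

print("memory: %.1f KB -> %.1f KB" % (reservoir_w.nbytes / 1024, q.nbytes / 1024))
print("max abs error:", float(np.max(np.abs(dequantize(q, scale) - reservoir_w))))
```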
{"title":"Quantized Reservoir Computing on Edge Devices for Communication Applications","authors":"Shiya Liu, Lingjia Liu, Y. Yi","doi":"10.1109/SEC50012.2020.00068","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00068","url":null,"abstract":"With the advance of edge computing, a fast and efficient machine learning model running on edge devices is needed. In this paper, we propose a novel quantization approach that reduces the memory and compute demands on edge devices without losing much accuracy. Also, we explore its application in communication such as symbol detection in 5G systems, attack detection of smart grid, and dynamic spectrum access. Conventional neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) could be exploited on these applications and achieve state-of-the-art performance. However, conventional neural networks consume a large amount of computation and storage resources, and thus do not fit well to edge devices. Reservoir computing (RC), which is a framework for computation derived from RNN, consists of a fixed reservoir layer and a trained readout layer. The advantages of RC compared to traditional RNNs are faster learning and lower training costs. Besides, RC has faster inference speed with fewer parameters and resistance to overfitting issues. These merits make the RC system more suitable for applications running on edge devices. We apply the proposed quantization approach to RC systems and demonstrate the proposed quantized RC system on Xilinx Zynq®-7000 FPGA board. On the sequential MNIST dataset, the quantized RC system utilizes 62%, 65%, and 64% less of DSP, FF, and LUT, respectively compared to the floating-point RNN. The inference speed is improved by 17 times with an 8% accuracy drop.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128055593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TAES: Two-factor Authentication with End-to-End Security against VoIP Phishing
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00049
Dai Hou, Hao Han, Ed Novak
In the current state of communication technology, the abuse of VoIP has led to the emergence of telecommunications fraud. We urgently need an end-to-end identity authentication mechanism to verify the identity of the caller. This paper proposes an end-to-end, dual identity authentication mechanism to solve the problem of telecommunications fraud. Our first technique uses the Hermes data transmission algorithm over an unknown voice channel to transmit a certificate, thereby authenticating the caller’s phone number. Our second technique uses voice-print recognition and a Gaussian mixture model (a general probabilistic background model) to build a model of the speaker and verify the caller’s voice, ensuring the speaker’s identity. Our solution is implemented on the Android platform, where we test and evaluate both transmission efficiency and speaker recognition. Experiments conducted on Android phones show that the error rate of transmitting the signed certificate over the voice channel is within 3.247%, so the certificate signature verification mechanism is feasible. The accuracy of the voice-print recognition is 72%, making it effective as a reference for identity authentication.
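An illustrative sketch of GMM-based voice-print verification, the second factor described above: fit a Gaussian mixture on enrollment features and accept a caller when the test utterance's average log-likelihood exceeds a threshold. Real systems score acoustic features such as MFCCs against a background model; the random arrays and the threshold here are stand-ins, not the paper's setup.

```python
# GMM speaker verification by log-likelihood thresholding (illustrative data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
enroll_features = rng.normal(loc=0.0, scale=1.0, size=(500, 13))  # enrollment "frames"
genuine_test = rng.normal(loc=0.0, scale=1.0, size=(200, 13))
impostor_test = rng.normal(loc=2.0, scale=1.0, size=(200, 13))

speaker_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                              random_state=0).fit(enroll_features)

THRESHOLD = -20.0  # would be tuned on held-out data in practice
for name, feats in [("genuine", genuine_test), ("impostor", impostor_test)]:
    score = speaker_gmm.score(feats)  # mean log-likelihood per frame
    print(name, round(score, 2), "accepted" if score > THRESHOLD else "rejected")
```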
{"title":"TAES: Two-factor Authentication with End-to-End Security against VoIP Phishing","authors":"Dai Hou, Hao Han, Ed Novak","doi":"10.1109/SEC50012.2020.00049","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00049","url":null,"abstract":"In the current state of communication technology, the abuse of VoIP has led to the emergence of telecommunications fraud. We urgently need an end-to-end identity authentication mechanism to verify the identity of the caller. This paper proposes an end-to-end, dual identity authentication mechanism to solve the problem of telecommunications fraud. Our first technique is to use the Hermes algorithm of data transmission technology on an unknown voice channel to transmit the certificate, thereby authenticating the caller’s phone number. Our second technique uses voice-print recognition technology and a Gaussian mixture model (a general background probabilistic model) to establish a model of the speaker to verify the caller’s voice to ensure the speaker’s identity. Our solution is implemented on the Android platform, and simultaneously tests and evaluates transmission efficiency and speaker recognition. Experiments conducted on Android phones show that the error rate of the voice channel transmission signature certificate is within 3.247 %, and the certificate signature verification mechanism is feasible. The accuracy of the voice-print recognition is 72%, making it effective as a reference for identity authentication.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124867545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Contextual Bi-armed Bandit Approach for MPTCP Path Management in Heterogeneous LTE and WiFi Edge Networks
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00042
A. Alzadjali, Flavio Esposito, J. Deogun
Multi-homed mobile devices are capable of aggregating traffic transmissions over heterogeneous networks. MultiPath TCP (MPTCP) is an evolution of TCP that allows the simultaneous use of multiple interfaces for a single connection. Despite the success of MPTCP, its deployment can be enhanced by controlling which network interface is used as the initial path during connection setup. In this paper, we propose an online MPTCP path manager based on a contextual bandit algorithm that chooses the primary path so as to maximize throughput and minimize delay and packet loss. The contextual bandit path manager deals with the rapid changes of multiple transmission paths in heterogeneous networks. The algorithm provides the path manager with an adaptive policy that is applied whenever an MPTCP connection is attempted, based on the characteristics of the last-hop wireless signals. Our experiments run over a real dataset of WiFi/LTE networks using an NS3 implementation of MPTCP, enhanced to better support MPTCP path management control. We analyzed MPTCP’s throughput and latency metrics under various network conditions and found that the contextual bandit MPTCP path manager outperforms the baselines used in our evaluation experiments. Using edge computing technology, this model can be implemented in a mobile edge computing server to avoid MPTCP path management issues by communicating to the mobile equipment the best path for the given radio conditions. Our evaluation demonstrates that leveraging adaptive context-awareness improves the utilization of multiple network interfaces.
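Below is a minimal epsilon-greedy contextual bandit for primary-path selection, illustrating the decision the abstract describes: given last-hop signal features, pick LTE or WiFi as the initial MPTCP path. The linear reward model, features, and hidden environment are toy assumptions, not the paper's algorithm or dataset.

```python
# Two-armed contextual bandit choosing the initial MPTCP path (illustrative).
import numpy as np

rng = np.random.default_rng(1)
ARMS = ["wifi", "lte"]
DIM = 3                                     # e.g., [WiFi RSSI, LTE RSRP, load], normalized
theta = {a: np.zeros(DIM) for a in ARMS}    # per-arm linear reward estimates
EPS, LR = 0.1, 0.05

def true_reward(arm, x):
    # Hidden environment: WiFi pays off when its signal (x[0]) is strong, LTE otherwise.
    base = x[0] if arm == "wifi" else x[1]
    return base + 0.1 * rng.normal()

for step in range(2000):
    x = rng.uniform(0, 1, size=DIM)                      # observed context
    if rng.random() < EPS:
        arm = ARMS[rng.integers(len(ARMS))]              # explore
    else:
        arm = max(ARMS, key=lambda a: theta[a] @ x)      # exploit
    r = true_reward(arm, x)
    theta[arm] += LR * (r - theta[arm] @ x) * x          # online update of the reward model

print({a: np.round(theta[a], 2) for a in ARMS})
```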
{"title":"A Contextual Bi-armed Bandit Approach for MPTCP Path Management in Heterogeneous LTE and WiFi Edge Networks","authors":"A. Alzadjali, Flavio Esposito, J. Deogun","doi":"10.1109/SEC50012.2020.00042","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00042","url":null,"abstract":"Multi-homed mobile devices are capable of aggregating traffic transmissions over heterogeneous networks. MultiPath TCP (MPTCP) is an evolution of TCP that allows the simultaneous use of multiple interfaces for a single connection. Despite the success of MPTCP, its deployment can be enhanced by controlling which network interface to be used as an initial path during the connectivity setup. In this paper, we proposed an online MPTCP path manager based on the contextual bandit algorithm to help choose the optimal primary path connection that maximizes throughput and minimizes delay and packet loss. The contextual bandit path manager deals with the rapid changes of multiple transmission paths in heterogeneous networks. The output of this algorithm introduces an adaptive policy to the path manager whenever the MPTCP connection is attempted based on the last hop wireless signals characteristics. Our experiments run over a real dataset of WiFi/LTE networks using NS3 implementation of MPTCP, enhanced to better support MPTCP path management control. We analyzed MPTCP’s throughput and latency metrics in various network conditions and found that the performance of the contextual bandit MPTCP path manager improved compared to the baselines used in our evaluation experiments. Utilizing edge computing technology, this model can be implemented in a mobile edge computing server to dodge MPTCP path management issues by communicating to the mobile equipment the best path for the given radio conditions. Our evaluation demonstrates that leveraging adaptive contextawareness improves the utilization of multiple network interfaces.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124922854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fogify: A Fog Computing Emulation Framework
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00011
Moysis Symeonides, Z. Georgiou, Demetris Trihinas, G. Pallis, M. Dikaiakos
Fog Computing is emerging as the dominant paradigm bridging the compute and connectivity gap between sensing devices and latency-sensitive services. However, experimenting with and evaluating IoT services is a daunting task involving the manual configuration and deployment of a mixture of geo-distributed physical and virtual infrastructure with different resource and network requirements. This results in sub-optimal, costly and error-prone deployments, due to numerous unexpected overheads not envisioned in the design phase and testing conditions that do not resemble the target environment. In this paper, we introduce Fogify, an emulator easing the modeling, deployment and large-scale experimentation of fog and edge testbeds. Fogify provides a toolset to: (i) model complex fog topologies comprised of heterogeneous resources, network capabilities and QoS criteria; (ii) deploy the modelled configuration and services using popular containerized descriptions to a cloud or local environment; (iii) experiment, measure and evaluate the deployment by injecting faults and adapting the configuration at runtime to test different “what-if” scenarios that reveal the limitations of a service before it is introduced to the public. In the evaluation, proof-of-concept IoT services with real-world workloads are introduced to show the wide applicability and benefits of rapid prototyping via Fogify.
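As an illustration of the kind of topology model such an emulator consumes, the sketch below describes nodes with resource capacities, links with network QoS, and a service placement, then answers a simple "what-if" latency question. This is a generic, hypothetical description, not Fogify's actual specification format.

```python
# A toy fog-topology model: nodes, links with QoS, and a service placement.
fog_topology = {
    "nodes": {
        "cloud":    {"cpu_cores": 16, "memory_gb": 64, "layer": "cloud"},
        "fog-gw-1": {"cpu_cores": 4,  "memory_gb": 8,  "layer": "fog"},
        "cam-edge": {"cpu_cores": 1,  "memory_gb": 1,  "layer": "edge"},
    },
    "links": [
        {"src": "cam-edge", "dst": "fog-gw-1", "latency_ms": 5,  "bandwidth_mbps": 100},
        {"src": "fog-gw-1", "dst": "cloud",    "latency_ms": 40, "bandwidth_mbps": 50},
    ],
    "services": {
        "object-detector": {"image": "detector:latest", "placement": "fog-gw-1"},
    },
}

def end_to_end_latency(topology, path):
    """Sum link latencies along a node path, a typical 'what-if' question."""
    lookup = {(l["src"], l["dst"]): l["latency_ms"] for l in topology["links"]}
    return sum(lookup[(a, b)] for a, b in zip(path, path[1:]))

print(end_to_end_latency(fog_topology, ["cam-edge", "fog-gw-1", "cloud"]), "ms")
```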
{"title":"Fogify: A Fog Computing Emulation Framework","authors":"Moysis Symeonides, Z. Georgiou, Demetris Trihinas, G. Pallis, M. Dikaiakos","doi":"10.1109/SEC50012.2020.00011","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00011","url":null,"abstract":"Fog Computing is emerging as the dominating paradigm bridging the compute and connectivity gap between sensing devices and latency-sensitive services. However, experimenting and evaluating IoT services is a daunting task involving the manual configuration and deployment of a mixture of geodistributed physical and virtual infrastructure with different resource and network requirements. This results in sub-optimal, costly and error-prone deployments due to numerous unexpected overheads not initially envisioned in the design phase and underwhelming testing conditions not resembling the end environment. In this paper, we introduce Fogify, an emulator easing the modeling, deployment and large-scale experimentation of fog and edge testbeds. Fogify provides a toolset to: (i) model complex fog topologies comprised of heterogeneous resources, network capabilities and QoS criteria; (ii) deploy the modelled configuration and services using popular containerized descriptions to a cloud or local environment; (iii) experiment, measure and evaluate the deployment by injecting faults and adapting the configuration at runtime to test different “what-if” scenarios that reveal the limitations of a service before introduced to the public. In the evaluation, proof-of-concept IoT services with real-world workloads are introduced to show the wide applicability and benefits of rapid prototyping via Fogify.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125400186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fooling Edge Computation Offloading via Stealthy Interference Attack
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00062
Letian Zhang, Jie Xu
There is growing interest in developing deep learning methods to solve resource management problems in wireless edge computing systems where model-based designs are infeasible. While deep learning is known to be vulnerable to adversarial example attacks, the security risk of learning-based designs in the context of edge computing is not well understood. In this paper, we propose and study a new adversarial example attack, called the stealthy interference attack (SIA), on deep reinforcement learning (DRL)-based edge computation offloading systems. In SIA, the attacker exerts a carefully determined level of interference signal to change the input states of the DRL-based policy, thereby fooling the mobile device into selecting a targeted, compromised edge server for computation offloading while evading detection. Simulation results demonstrate the effectiveness of SIA and show that our algorithm outperforms existing adversarial machine learning algorithms in terms of a higher attack success probability and a lower power consumption.
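As a toy illustration of the attack idea only: the attacker adds a small, bounded perturbation to the state observed by an offloading policy so that its argmax may flip to a target edge server. The linear policy and the one-step, sign-based perturbation below are stand-ins for the paper's DRL setting and the actual SIA procedure; whether the decision flips depends on the assumed interference budget.

```python
# Bounded state perturbation steering an offloading policy toward a target server.
import numpy as np

rng = np.random.default_rng(0)
N_SERVERS, DIM = 3, 6
W = rng.normal(size=(N_SERVERS, DIM))       # stand-in policy: scores = W @ state

def offload_decision(state):
    return int(np.argmax(W @ state))

state = rng.normal(size=DIM)
target = 2                                   # compromised server the attacker prefers
clean = offload_decision(state)

# One-step perturbation along the direction that raises the target's score
# relative to the currently chosen server, bounded by an interference budget.
budget = 1.0
direction = W[target] - W[clean]
delta = budget * np.sign(direction)          # L-infinity-bounded, FGSM-style step
adv_state = state + delta

print("clean decision:", clean, "-> perturbed decision:", offload_decision(adv_state))
```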
{"title":"Fooling Edge Computation Offloading via Stealthy Interference Attack","authors":"Letian Zhang, Jie Xu","doi":"10.1109/SEC50012.2020.00062","DOIUrl":"https://doi.org/10.1109/SEC50012.2020.00062","url":null,"abstract":"There is a growing interest in developing deep learning methods to solve many resource management problems in wireless edge computing systems where model-based designs are infeasible. While deep learning is known to be vulnerable to adversarial example attacks, the security risk of learningbased designs in the context of edge computing is not well understood. In this paper, we propose and study a new adversarial example attack, called stealthy interference attack (SIA), in deep reinforcement learning (DRL)-based edge computation offloading systems. In SIA, the attacker exerts a carefully determined level of interference signal to change the input states of the DRL-based policy, thereby fooling the mobile device in selecting a target and compromised edge server for computation offloading while evading detection. Simulation results demonstrate the effectiveness of SIA, and show that our algorithm outperforms existing adversarial machine learning algorithms in terms of a higher attack success probability and a lower power consumption.","PeriodicalId":375577,"journal":{"name":"2020 IEEE/ACM Symposium on Edge Computing (SEC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128956395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}