Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00040
N. Sreekumar, A. Chandra, J. Weissman
Edge environments are generating an increasingly large amount of data due to the proliferation of edge devices. Accommodating this large influx of data at edge servers is a challenging issue. While some data can be processed as it is generated, other data must be stored for later access. This paper proposes the features that a new edge-native storage system must possess, including support for user mobility and node fluctuation. To motivate this, we first describe several emerging edge applications and their data needs. We then describe the challenges in meeting these needs. We then evaluate Cassandra, an out-of-the-box cloud storage system with many edge-friendly features, to assess its suitability as an edge storage system. We determined that while a cloud-based storage system can be ported to the edge to meet some of these challenges, other challenges require new solutions. Based on the challenges and the results of the Cassandra case study, we propose a set of design principles for a new edge-native storage system.
Title: Position Paper: Towards a Robust Edge-Native Storage System (2020 IEEE/ACM Symposium on Edge Computing (SEC))
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00023
Yuanli Wang, Dhruv Kumar, A. Chandra
Federated Learning [1] enables distributed devices to learn a shared machine learning model together without uploading their private training data. It has received significant attention recently and has been used in mobile applications such as search suggestion [2] and object detection [3]. Federated Learning differs from distributed machine learning for the following reasons: 1) System heterogeneity: federated learning is usually performed on devices with highly dynamic and heterogeneous network, compute, and power availability. 2) Data heterogeneity (or statistical heterogeneity): data is produced by different users on different devices, and may therefore have different statistical distributions (non-IID).
Title: Poster: Exploiting Data Heterogeneity for Performance and Reliability in Federated Learning
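The weighted aggregation at the heart of federated learning can be sketched in a few lines. This toy FedAvg-style average (function name and all numbers are illustrative, not from the poster) shows how unequal local dataset sizes, one face of data heterogeneity, skew the global model toward data-rich clients:

```python
# Minimal sketch of FedAvg-style aggregation over heterogeneous clients.
# Each client holds a different amount of (non-IID) data, so the server
# weights each client's model update by its local sample count.

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: one parameter vector (list of floats) per client,
                    produced by local training.
    client_sizes:   local training sample count per client; unequal
                    sizes reflect data heterogeneity.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_model = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_model[i] += (n / total) * w[i]
    return global_model

# Three clients with very different data volumes (non-IID setting):
updates = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sizes = [80, 10, 10]
print(fedavg(updates, sizes))  # dominated by the data-rich client
```

The data-rich client contributes 80% of the average, which is exactly the imbalance that makes plain FedAvg sensitive to statistical heterogeneity.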
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00008
Wenzhao Zhang, Yuxuan Zhang, Hongchang Fan, Yi Gao, Wei Dong, Jinfeng Wang
Customizing and deploying an edge system is a time-consuming and complex task, considering hardware heterogeneity, third-party software compatibility, diverse performance requirements, etc. In this paper, we present TinyEdge, a holistic system for the rapid customization of edge systems. The key idea of TinyEdge is to use a top-down approach for designing the software and estimating the performance of the customized edge system under different hardware specifications. Developers select and configure modules to specify the critical logic of their interactions, without dealing with the specific hardware or software. Taking the configuration as input, TinyEdge automatically generates the deployment package and estimates the performance after sufficient profiling. TinyEdge provides a unified customization framework for modules to specify their dependencies, functionalities, interactions, and configurations. We implement TinyEdge and evaluate its performance using real-world edge systems. Results show that: 1) TinyEdge achieves rapid customization of edge systems, reducing customization time by 44.15% and lines of code by 67.79% on average compared with state-of-the-art edge platforms; 2) TinyEdge builds compact modules and optimizes latent circular dependency detection and message queuing efficiency; 3) TinyEdge's performance estimation has a low average absolute error in various settings.
Title: TinyEdge: Enabling Rapid Edge System Customization for IoT Applications
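One of the checks the abstract mentions, latent circular dependency detection among configured modules, boils down to cycle detection in a dependency graph. A minimal sketch (the module names and the `find_cycle` helper are hypothetical, not TinyEdge's API):

```python
# Sketch of circular-dependency detection over module configurations,
# in the spirit of TinyEdge's dependency checks. Uses a three-color DFS:
# hitting a GRAY (in-progress) node means we walked into a cycle.

def find_cycle(deps):
    """Return one dependency cycle as a list of module names, or None.

    deps: mapping module -> list of modules it depends on.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in deps}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in deps.get(u, []):
            if color.get(v, WHITE) == GRAY:
                # reconstruct the cycle v -> ... -> u -> v
                cycle = [u]
                while cycle[-1] != v:
                    cycle.append(parent[cycle[-1]])
                return list(reversed(cycle))
            if color.get(v, WHITE) == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for m in deps:
        if color[m] == WHITE:
            found = dfs(m)
            if found:
                return found
    return None

modules = {"camera": ["encoder"], "encoder": ["uploader"], "uploader": ["camera"]}
print(find_cycle(modules))  # the three modules form a cycle
```

Running the check at configuration time, before generating a deployment package, is what lets such a system reject a broken module graph early.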
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00061
Daniel Happ
Publish/subscribe systems, and in particular the MQTT protocol, are often used to meet the messaging needs of modern Internet of Things (IoT) systems, i.e., between the IoT devices and the related services. Their benefit lies in their decoupling properties, easing the seamless movement of system components across the increasingly distributed hardware substrate of cloud, fog, and edge. However, publish/subscribe systems alone are rarely suitable for all use cases commonly associated with the IoT. One important example is the lack of persistent storage of sensor data in popular pub/sub systems such as MQTT, which often leads to the anti-pattern of constantly subscribing and locally storing the data at the subscriber side, resulting in multiple independent copies across the network. In this paper, we present our analysis of integrating storage technologies into pub/sub: we show how append-only logs, considered the prevailing storage paradigm in some IoT-focused systems, can be added to common pub/sub systems. We outline three options in detail with respect to their features. We further show, on an abstract level, what the glue code combining these solutions might look like.
Title: On Combining Publish/Subscribe with Append-only Logs for IoT Data
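The combination described above can be sketched as a broker that backs each topic with an append-only log, so late subscribers replay history instead of keeping private copies. The class and method names below are illustrative, not MQTT's or any real broker's API:

```python
# Minimal sketch of a pub/sub broker backed by per-topic append-only logs.
# Instead of every subscriber storing its own copy (the anti-pattern noted
# above), the broker keeps one log per topic and lets late subscribers
# replay from any offset.

class LogBackedBroker:
    def __init__(self):
        self.logs = {}         # topic -> append-only list of messages
        self.subscribers = {}  # topic -> list of callbacks

    def publish(self, topic, message):
        offset = len(self.logs.setdefault(topic, []))
        self.logs[topic].append(message)       # durable, append-only
        for cb in self.subscribers.get(topic, []):
            cb(offset, message)                # live delivery
        return offset

    def subscribe(self, topic, callback, replay_from=None):
        """Register a callback; optionally replay stored history first."""
        if replay_from is not None:
            for off, msg in enumerate(self.logs.get(topic, [])):
                if off >= replay_from:
                    callback(off, msg)
        self.subscribers.setdefault(topic, []).append(callback)

broker = LogBackedBroker()
broker.publish("sensors/temp", 21.5)
broker.publish("sensors/temp", 22.0)

received = []
broker.subscribe("sensors/temp", lambda off, m: received.append(m), replay_from=0)
broker.publish("sensors/temp", 22.5)
print(received)  # replayed history plus the live message
```

The offset-based replay is the log's contribution: a subscriber that reconnects can resume from its last offset rather than re-subscribing and hoarding data locally.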
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00028
Yifan Gong, Lu Wang, Wei Liu, Jiangming Jin
Benefiting from major breakthroughs in AI technology and the increasing adoption of AIoT (AI + IoT), intelligent robot systems have developed rapidly in recent years. As a core component of such intelligent systems, service discovery plays a crucial role in system reliability. Because of the high CPU usage of conventional service discovery, a lightweight service discovery mechanism is required in such resource-limited robot systems. To improve CPU efficiency, we propose a weakly centralized mechanism and a socket-based notification mechanism that reduce the number of events in service discovery. The evaluation results show that our proposed lightweight service discovery mechanism can reduce CPU usage by 95% on average compared with the conventional service discovery used in ROS2, an industry standard in robot systems.
Title: Poster: A Light Weight Service Discovery Mechanism in Robot Systems
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00053
Tongyu Wang, Hao Han, Zijie Wang
With the availability of displays and cameras on mobile devices, screen-camera links for Visible Light Communication (VLC) have attracted much attention due to their convenience, infrastructure-free operation, and security. In a conventional screen-camera link, the sender encodes data bits into a barcode stream, and the receiver captures the barcode stream and decodes the barcodes. However, conventional screen-camera communication systems all suffer from both the CMOS rolling shutter and the inter-frame mixing problem when the display rate is close to the camera capture rate, which leads to a high block transfer error rate. In this paper, we propose a system called FareQR that adds an outline border to the barcode stream to help the receiver detect mixed frames and de-obfuscate each mixed frame into a perfect QR code. We formulate the mixed-frame de-obfuscation problem as a hard-decision decoding task and propose a Viterbi algorithm to resolve each block in the mixed areas. We test FareQR, and the results demonstrate that our work can recover the mixed areas and reduce the block transfer error rate.
Title: FareQR: Fast and Reliable Screen-Camera Transfer System for Mobile Devices using QR Code
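Hard-decision Viterbi decoding, the paradigm FareQR applies to mixed-frame de-obfuscation, can be illustrated with a generic toy model. The two block states and all probabilities below are made up for illustration and are not FareQR's actual formulation:

```python
# Generic Viterbi sketch: recover the most likely hidden state sequence
# from noisy observations by dynamic programming over (state, time).
# Toy model: a QR block is "black" or "white"; observations are noisy
# pixel readings (0 ~ dark, 1 ~ bright).

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (best probability of any path ending in s at time t,
    #            predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    best = max(states, key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("black", "white")
start = {"black": 0.5, "white": 0.5}
trans = {"black": {"black": 0.7, "white": 0.3},   # neighboring blocks
         "white": {"black": 0.3, "white": 0.7}}   # tend to agree
emit = {"black": {0: 0.9, 1: 0.1},                # noisy readout of a
        "white": {0: 0.2, 1: 0.8}}                # mixed-frame block
print(viterbi([0, 0, 1, 1], states, start, trans, emit))
```

The hard-decision flavor is that each block is committed to a discrete state; the dynamic program simply picks the discrete sequence with the highest joint probability.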
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00065
Xiang Sun, Rana Albelaihi, Z. Akhavan
In this paper, we propose to cache popular Internet of Things (IoT) resources in brokers (which can be considered application-layer middleware) by applying the CoAP Publish/Subscribe protocol, in order to reduce the energy consumption of the servers (e.g., IoT devices) that host these resources. If an IoT resource is cached in a broker, all requests to retrieve the content of that resource are delivered to the broker, which responds by sending the related contents, thus increasing the power consumption of the broker. To reduce the operational expenditure of the broker provider, each broker is powered by green energy and uses on-grid energy as a backup. The on-grid energy consumption of the brokers may differ: brokers with low green energy generation and more cached IoT resources may consume more on-grid energy than brokers with high green energy generation and fewer cached IoT resources. To minimize the total on-grid energy consumption of the brokers, we propose the Green Energy Aware Resource caching (GEAR) algorithm, which balances energy demands by re-allocating/re-caching popular IoT resources among the brokers. The performance of GEAR is validated via simulations.
Title: Caching IoT Resources in Green Brokers at the Application Layer
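The load-balancing intuition behind GEAR can be sketched as a greedy placement that sends each cached resource to the broker with the most spare green energy. This is an illustrative simplification, not the paper's algorithm; all names and numbers are hypothetical:

```python
# Illustrative greedy sketch of green-energy-aware resource placement:
# assign each cached IoT resource to the broker with the most green-energy
# headroom, so on-grid (backup) draw is kept small.

def place_resources(green_budget, demands):
    """green_budget: broker -> green energy available.
    demands: resource -> energy cost of serving its requests.
    Returns (placement, total on-grid energy consumed)."""
    load = {b: 0.0 for b in green_budget}
    placement = {}
    # heaviest resources first, each to the broker with most headroom
    for res in sorted(demands, key=demands.get, reverse=True):
        b = max(load, key=lambda br: green_budget[br] - load[br])
        placement[res] = b
        load[b] += demands[res]
    # any load above a broker's green budget must come from the grid
    on_grid = sum(max(0.0, load[b] - green_budget[b]) for b in load)
    return placement, on_grid

budgets = {"broker-A": 10.0, "broker-B": 4.0}   # green energy per broker
costs = {"temp": 6.0, "video": 5.0, "door": 2.0}  # serving cost per resource
placement, on_grid = place_resources(budgets, costs)
print(placement, on_grid)
```

A real scheme would also account for request locality and time-varying green generation, but the headroom-driven assignment above is the core balancing idea.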
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00017
Sidi Lu, Xin Yuan, Weisong Shi
Machine vision is key to the successful deployment of many Advanced Driver Assistance System (ADAS) / Automated Driving System (ADS) functions, which require accurate, high-resolution video processing in real time. Conventional approaches either reduce the frame rate or reduce the frame size of conventional camera videos, which leads to undesired consequences such as losing informative high-speed content and/or small objects in the video frames. Unlike conventional cameras, Compressive Imaging (CI) cameras are a promising application of Compressive Sensing, an emerging field built on the insight that the optically compressed signal (a small number of linear projections of the original video image data) contains sufficient high-speed information for reconstruction and processing. Yet CI cameras usually need complicated algorithms to retrieve the desired signal, leading to correspondingly high energy consumption. In this paper, we take a step toward real applications of CI cameras in connected and autonomous vehicles (CAVs), with the primary goals of accelerating accurate video analysis and decreasing energy consumption. We propose a novel vehicle-edge server-cloud closed-loop framework called Edge Compression for CI processing on CAVs. Our comprehensive experiments with four public datasets demonstrate that the detection accuracy on the compressed video images (called measurements) generated by the CI camera is close to the accuracy on reconstructed videos and comparable to the ground truth, which paves the way for applying CI in CAVs. Finally, six important observations with supporting evidence and analysis are presented to provide practical implications for researchers and domain experts. The code to reproduce our results is available at https://www.thecarlab.oryoutcomes/software.
Title: Edge Compression: An Integrated Framework for Compressive Imaging Processing on CAVs
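How a CI camera's "measurement" aggregates several high-speed frames can be shown with a toy mask-and-sum, y = sum_t (mask_t * frame_t) element-wise. The 2x2 frames and binary masks below are made up for illustration:

```python
# Sketch of how a compressive imaging camera forms a "measurement":
# several high-speed frames are modulated by per-frame binary masks and
# summed into one captured image, so a single compressed frame still
# carries temporal information from all of them.

def ci_measurement(frames, masks):
    """Element-wise mask-and-sum of frames: y = sum_t (mask_t * frame_t)."""
    rows, cols = len(frames[0]), len(frames[0][0])
    y = [[0.0] * cols for _ in range(rows)]
    for frame, mask in zip(frames, masks):
        for r in range(rows):
            for c in range(cols):
                y[r][c] += mask[r][c] * frame[r][c]
    return y

frames = [[[1, 2], [3, 4]],      # t = 0
          [[5, 6], [7, 8]]]      # t = 1
masks = [[[1, 0], [0, 1]],       # random binary modulation per frame
         [[0, 1], [1, 0]]]
print(ci_measurement(frames, masks))  # one image encoding both frames
```

The paper's point is that detectors can work on such measurements directly, skipping the expensive reconstruction that would normally recover the individual frames.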
Pub Date: 2020-11-01 | DOI: 10.1109/SEC50012.2020.00056
Samira Taghavi, Weisong Shi
Preserving privacy in image and video data captured in public environments is essential for any research group that leverages, publishes, or shares such data. Although several research efforts attempt to resolve these privacy issues, they have quality and efficiency limitations. In this work, we propose EdgeMask, a privacy-preserving service that leverages edge computing and deep learning models to provide real-time object segmentation, analyzing the input data with parallel computing to speed up object removal. Our experimental results indicate that EdgeMask reduces the computational time considerably.
Title: EdgeMask: An Edge-based Privacy Preserving Service for Video Data Sharing
In order to perform deep learning tasks everywhere, many optimizations have been proposed to address the resource limitations of mobile and IoT systems. A key approach is to dynamically adjust the computational resources of deep learning inference according to the characteristics of incoming inputs. For example, one popular optimization picks, for each input, a suitable combination of computations with respect to its inference difficulty. However, we find that such “dynamic routing” of computations can be exploited to drain or waste precious resources on mobile deep learning systems. In this work, we introduce a new deep learning attack dimension, computational resource draining, and demonstrate its feasibility via one possible attack vector: adversarial examples of input data. We describe how to construct special adversarial examples aimed at resource draining, and show on two experimental datasets that these poisoned inputs can deliberately increase the computation load. We hope that our findings shed light on the path to improving the robustness of mobile deep learning optimizations.
Title: Exploiting Adversarial Examples to Drain Computational Resources on Mobile Deep Learning Systems
Authors: Han Gao, Yulong Tian, Rongchun Yao, Fengyuan Xu, Xinyi Fu, Sheng Zhong
DOI: 10.1109/SEC50012.2020.00048
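The "dynamic routing" surface such an attack targets can be illustrated with a toy early-exit pipeline: benign inputs stop at the first confident stage, while an input crafted to keep confidence below the exit threshold forces every stage to run. All stages, thresholds, and numbers below are invented for illustration, not taken from the paper:

```python
# Toy sketch of an early-exit inference pipeline: cheap stages run first,
# and inference stops as soon as one stage is confident enough. An
# adversary that keeps confidence just below the threshold makes every
# input pay for the full (expensive) pipeline.

def run_with_early_exit(stage_confidences, threshold=0.9):
    """Each stage yields a confidence; exit at the first confident stage.
    Returns (stages_executed, final_confidence)."""
    executed = 0
    conf = 0.0
    for conf in stage_confidences:
        executed += 1
        if conf >= threshold:
            break
    return executed, conf

# Benign input: the first cheap stage is already confident.
print(run_with_early_exit([0.95, 0.97, 0.99]))  # exits after 1 stage

# Drained input: a perturbation keeps confidence below the exit
# threshold, so all stages run for this single input.
print(run_with_early_exit([0.60, 0.70, 0.85]))  # runs all 3 stages
```

The attack cost model follows directly: if the expensive tail stages dominate compute, each such poisoned input multiplies per-inference resource use without ever tripping an accuracy alarm.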