Toward Trustworthy Blockchain-as-a-Service with Auditing
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00068
Yongrae Jo, Jeonghyun Ma, Chanik Park
Many Blockchain-as-a-Service (BaaS) providers have emerged with the growing interest in BaaS among enterprises. However, because clients must depend on a centralized service provider, current BaaS offerings pose a potential security threat. In this study, we first consider the problem of auditing BaaS and then develop an Enforcer architecture for trustworthy BaaS.
{"title":"Toward Trustworthy Blockchain-as-a-Service with Auditing","authors":"Yongrae Jo, Jeonghyun Ma, Chanik Park","doi":"10.1109/ICDCS47774.2020.00068","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00068","url":null,"abstract":"Many Blockchain-as-a-Service (BaaS) providers have emerged with the growing interest in BaaS among enterprises. However, current BaaS providers can pose a potential security threat in the context of a centralized service provider and for clients that depend on the provider. In this study, we first consider the problem of auditing BaaS and develop an Enforcer architecture for trustworthy BaaS.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130987680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploration of TransE in a Distributed Environment
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00190
Meiyan Lu, L. Liao, Feng Zhang, Dandan Song
Knowledge graphs are widely used in knowledge mining. TransE embeds a knowledge graph into a continuous vector space using the structural information of triples, $\left( \overrightarrow{e_h} + \overrightarrow{e_r} \approx \overrightarrow{e_t} \right)$, and is a very important component of knowledge representation. However, current TransE models are implemented only on single-node machines. With the explosive growth of data volumes, single-node TransE cannot meet the processing demands of large knowledge graphs, so a distributed TransE is urgently needed. In this poster, we propose a distributed TransE written in MPI, which can run on HPC clusters. In our experiments, the distributed TransE exhibits high speedup and accuracy.
{"title":"Exploration of TransE in a Distributed Environment","authors":"Meiyan Lu, L. Liao, Feng Zhang, Dandan Song","doi":"10.1109/ICDCS47774.2020.00190","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00190","url":null,"abstract":"Knowledge graph is popular in knowledge mining fields. TransE uses the structure information of triples $left( {overrightarrow {{e_h}} + overrightarrow {{e_r}} approx overrightarrow {{e_t}} } right)$ to embed knowledge graphs into a continuous vector space, which is a very important component in knowledge representations. However, current TransE models are only implemented on single-node machines. With the explosive growth of data volumes, single-node TransE cannot meet the demand for data processing of large knowledge graphs, so a distributed TransE is urgently needed. In this poster, we propose a distributed TransE written in MPI, which can run on HPC clusters. In our experiments, our distributed TransE exhibits high-performance speedup and accuracy.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124290980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FastUp: Compute a Better TCAM Update Scheme in Less Time for SDN Switches
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00128
Ying Wan, Haoyu Song, Hao Che, Yang Xu, Yi Wang, Chuwen Zhang, Zhijun Wang, Tian Pan, Hao Li, Hong Jiang, Chengchen Hu, Zhikang Chen, Bin Liu
While widely used for flow tables in SDN switches, TCAM faces challenges in rule updates: both the computation time and the interrupt time need to be short. We propose FastUp, a new TCAM update algorithm that improves on previous dynamic-programming-based algorithms. Evaluations show that FastUp shortens the computation time by 40~100× and the interrupt time by 1.2~2.5×. In addition, we are the first to prove the NP-hardness of the optimal TCAM update problem, and we provide a practical method to evaluate an algorithm's degree of optimality. Experiments show that FastUp's optimality reaches 90%.
{"title":"FastUp: Compute a Better TCAM Update Scheme in Less Time for SDN Switches","authors":"Ying Wan, Haoyu Song, Hao Che, Yang Xu, Yi Wang, Chuwen Zhang, Zhijun Wang, Tian Pan, Hao Li, Hong Jiang, Chengchen Hu, Zhikang Chen, Bin Liu","doi":"10.1109/ICDCS47774.2020.00128","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00128","url":null,"abstract":"While widely used for flow tables in SDN switches, TCAM faces challenges for rule updates. Both the computation time and interrupt time need to be short. We propose FastUp, a new TCAM update algorithm, which improves the previous dynamic programming-based algorithms. Evaluations show that FastUp shortens the computation time by 40~100× and the interrupt time by 1.2~2.5×. In addition, we are the first to prove the NP-hardness of the optimal TCAM update problem, and provide a practical method to evaluate an algorithm’s degree of optimality. Experiments show that FastUp’s optimality reaches 90%.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"182 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124586252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward Adaptive Disk Failure Prediction via Stream Mining
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00044
Shujie Han, P. Lee, Zhirong Shen, Cheng He, Yi Liu, Tao Huang
We explore machine learning for accurately predicting imminent disk failures and hence providing proactive fault tolerance for modern storage systems. Current disk failure prediction approaches are mostly offline and assume that the disk logs required for training learning models are available a priori. However, in large-scale disk deployments, disk logs are continuously generated as an evolving data stream in which the statistical patterns vary over time (also known as concept drift). This challenge motivates the need for online techniques that perform training and prediction on the incoming stream of disk logs in real time, while adapting to concept drift. We present StreamDFP, a general stream mining framework for disk failure prediction with concept-drift adaptation. We start with a measurement study and demonstrate the existence of concept drift in various disk models based on datasets from Backblaze and Alibaba Cloud. Motivated by our study, we design StreamDFP with three key techniques, namely (i) online labeling, (ii) concept-drift-aware training, and (iii) general prediction, with the primary objective of making StreamDFP a general framework that supports various machine learning algorithms. Our evaluation shows that StreamDFP significantly improves prediction accuracy compared to approaches without concept-drift adaptation under various settings, and achieves reasonably high stream processing performance.
{"title":"Toward Adaptive Disk Failure Prediction via Stream Mining","authors":"Shujie Han, P. Lee, Zhirong Shen, Cheng He, Yi Liu, Tao Huang","doi":"10.1109/ICDCS47774.2020.00044","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00044","url":null,"abstract":"We explore machine learning for accurately predicting imminent disk failures and hence providing proactive fault tolerance for modern storage systems. Current disk failure prediction approaches are mostly offline and assume that the disk logs required for training learning models are available a priori. However, in large-scale disk deployment, disk logs are often continuously generated as an evolving data stream, in which the statistical patterns vary over time (also known as concept drift). Such a challenge motivates the need of online techniques that perform training and prediction on the incoming stream of disk logs in real time, while being adaptive to concept drift.We present StreamDFP, a general stream mining framework for disk failure prediction with concept-drift adaptation. We start with a measurement study and demonstrate the existence of concept drift on various disk models based on the datasets from Backblaze and Alibaba Cloud. Motivated by our study, we design StreamDFP with three key techniques, namely (i) online labeling, (ii) concept-drift-aware training, and (iii) general prediction, with a primary objective of making StreamDFP support various machine learning algorithms as a general frame-work. Our evaluation shows that StreamDFP improves the prediction accuracy significantly compared to without concept-drift adaptation under various settings, and achieves reasonably high stream processing performance.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124021121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Rule Updating based on Machine Learning in Complex Event Processing
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00176
Yunhao Sun, Guan-yu Li, B. Ning
Complex Event Processing (CEP) is essential in the Semantic Web of Things (SWoT), which deploys large numbers of sensor devices in settings such as smart traffic and smart cities. CEP mainly addresses the heterogeneity of stream data processing, where streaming data is connected to the Internet by masses of wireless sensor devices. The core work of CEP is rule updating. Existing research on rule updating is designed for static environments, and it is quite laborious to transplant those rules to dynamic environments. To enhance the portability of event rules, we propose a method of automatic rule updating based on machine learning that learns the rules of a dynamic environment. Experimental results show that the proposed methods are effective and efficient.
{"title":"Automatic Rule Updating based on Machine Learning in Complex Event Processing","authors":"Yunhao Sun, Guan-yu Li, B. Ning","doi":"10.1109/ICDCS47774.2020.00176","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00176","url":null,"abstract":"Complex Event Process (CEP) is very essential in Semantic Web of Things (SWoT) that deploy a large number of sensor devices, like smart traffic and smart city. CEP mainly solves heterogenous problems of stream data processing, where streaming data is connected to internet by a mass of wireless sensor devices. The core work of CEP is rule updating. Existing researches of rule updating are designed for static environments, and it is quite laborious to transplant those rules for dynamic environments. To enhance the portability of event rules, a method of automatic rule updating based on machine learning is proposed to learn the rules of a dynamic environment. Experimental results reveal that the proposed methods are effective and efficient.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129716451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SineKV: Decoupled Secondary Indexing for LSM-based Key-Value Stores
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00071
Fei Li, Youyou Lu, Zhe Yang, J. Shu
Secondary indexing is in high demand by many applications to accelerate query performance on key-value stores. Current secondary indices on key-value stores are typically built on top of the primary index: in a secondary-key query, the primary keys retrieved from the secondary index are used to access the primary index and fetch the records. This record-fetching process issues many point lookups in the primary index and exacerbates read amplification. In this paper, we present SineKV, a decoupled Secondary indexing Key-Value store that avoids fetching records through the primary index and improves secondary-key query performance. First, SineKV separates the records from the indices and lets each index point to the record values independently. Second, SineKV proposes a mapping-based lazy index maintenance strategy to ensure the consistency of secondary indices. Finally, SineKV leverages the CMB feature of the underlying NVMe SSDs to guarantee crash consistency. We implement and evaluate SineKV against LevelDB- and WiscKey-based designs. The evaluations show that SineKV outperforms LevelDB- and WiscKey-based systems by up to 6.12× and 2.78×, respectively, under microbenchmarks and mixed workloads.
{"title":"SineKV: Decoupled Secondary Indexing for LSM-based Key-Value Stores","authors":"Fei Li, Youyou Lu, Zhe Yang, J. Shu","doi":"10.1109/ICDCS47774.2020.00071","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00071","url":null,"abstract":"Secondary indexing is highly demanded for key-value stores by many applications to accelerate query performance. Current secondary indices on key-value stores are typically built on top of the primary index. In a secondary key query, the primary index has to be accessed to fetch the records, with the retrieved primary keys from the secondary index. The record fetching process invokes lots of point lookups in the primary index and exacerbates the read amplification. In this paper, we present SineKV, a decoupled Secondary indexing Key-Value store, aiming to avoid fetching records from the primary index and improve the secondary key query performance. Firstly, SineKV separates the records from the indices and keeps each index pointing to the record values independently. Secondly, SineKV proposes a mapping-based lazy index maintenance strategy to ensure the consistency of secondary indices. Finally, SineKV leverages the CMB feature of the underlying NVMe SSDs to guarantee crash consistency. We implement and evaluate SineKV against LevelDB and Wisc-Key based designs. The evaluations show SineKV outperforms LevelDB and WiscKey based systems by up to 6.12× and 2.78× under microbenchmark and mixed workloads.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129659595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Light in the Dark Web: Linking Dark Web Aliases to Real Internet Identities
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00081
Ehsan Arabnezhad, Massimo La Morgia, A. Mei, E. Nemmi, Julinda Stefa
Most users have several Internet names. On Facebook or LinkedIn, for example, people usually appear under their real name. On other standard websites, like forums, people often use aliases to protect their real identities from other users, with no real privacy against the website and the authorities. Aliases in the Dark Web are different: users expect strong identity protection. In this paper, we show that using both "open" aliases (aliases used in the standard Web) and Dark Web aliases can be dangerous per se. Indeed, we develop tools to link Dark Web aliases to open aliases. For the first time, we perform a massive-scale experiment on real scenarios: first between two Dark Web forums, then between the Dark Web forums and standard forums. Because of the large number of possible pairs, we first reduce the search space by cutting the number of potential matches down to a small set of candidates, and then select the correct alias among these candidates. We show that our methodology has excellent precision, from 87% to 94%, and recall around 80%.
{"title":"A Light in the Dark Web: Linking Dark Web Aliases to Real Internet Identities","authors":"Ehsan Arabnezhad, Massimo La Morgia, A. Mei, E. Nemmi, Julinda Stefa","doi":"10.1109/ICDCS47774.2020.00081","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00081","url":null,"abstract":"Most users have several Internet names. On Face-book or LinkedIn, for example, people usually appear with the real one. On other standard websites, like forums, people often use aliases to protect their real identities with respect to the other users, with no real privacy against the web site and the authorities. Aliases in the Dark Web are different: users expect strong identity protection.In this paper, we show that using both \"open\" aliases (aliases used in the standard Web) and Dark Web aliases can be dangerous per se. Indeed, we develop tools to link Dark Web to open aliases. For the first time, we perform a massive scale experiment on real scenarios. First between two Dark Web forums, then between the Dark Web forums and the standard forums. Due to a large number of possible pairs, we first reduce the search space cutting down the number of potential matches to a small set of candidates, and then on the selection of the correct alias among these candidates. We show that our methodology has excellent precision, from 87% to 94%, and recall around 80%.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128976656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dragon: A Lightweight, High Performance Distributed Stream Processing Engine
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00177
A. Harwood, M. Read, Gayashan Amarasinghe
The performance of a distributed stream processing engine is traditionally considered in terms of fundamental measurements of latency and throughput. Recently, Apache Storm has demonstrated sub-millisecond latencies for inter-component tuple transmission, though it does so through aggressive throttling that keeps tuple queues near empty and therefore strictly limits throughput. On the other hand, Apache Heron has excellent throughput characteristics, especially when operating near unstable conditions, but its inter-component latencies typically start around 10 milliseconds. Both of these systems require roughly 650 MB of installation space. We have developed Dragon, loosely based on the same API as Storm and Heron, which is both lightweight, requiring just 7.5 MB of installation space, and competitive in performance with Storm and Heron. In this paper we present experiments with all three systems using the Word Count benchmark. Dragon achieves throughput characteristics close to Heron's and inter-component latencies of less than 10 ms under high load. In particular, Dragon's maximum latency is significantly lower than Storm's maximum latency under high load. Finally, Dragon remained stable at higher effective throughput than Heron. We believe Dragon is a good "all-rounder" solution and is particularly suitable for Edge computing applications, given its small installation footprint.
{"title":"Dragon: A Lightweight, High Performance Distributed Stream Processing Engine","authors":"A. Harwood, M. Read, Gayashan Amarasinghe","doi":"10.1109/ICDCS47774.2020.00177","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00177","url":null,"abstract":"The performance of a distributed stream processing engine is traditionally considered in terms of fundamental measurements of latency and throughput. Recently, Apache Storm has demonstrated sub-millisecond latencies for inter-component tuple transmission, though it does so through aggressive throttling that leads to strict throughput limitations in order to keep tuple queues near empty. On the other hand, Apache Heron has excellent throughput characteristics, especially when operating near unstable conditions, but its inter-component latencies typically start around 10 milliseconds. Both of these systems require roughly 650MB of installation space. We have developed Dragon, loosely based on the same API as Storm and Heron, that is both lightweight, requiring just 7.5MB of installation space, and competitive in performance to Storm and Heron. In this paper we show experiments with all three systems using the Word Count benchmark. Dragon achieves throughput characteristics near to that of Heron and inter-component latencies less than 10ms under high load. In particular, Dragon’s maximum latency is significantly less that Storm’s maximum latency under high load. Finally Dragon managed to remain stable at higher effective throughput than Heron. We believe Dragon is a good \"allrounder\" solution and is particularly suitable for Edge computing applications, given its small installation footprint.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127835314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Achieving High Utilization for Approximate Fair Queueing in Data Center
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00099
Jingling Liu, Jiawei Huang, Ning Jiang, Weihe Li, Jianxin Wang
Modern data centers often host multiple applications with diverse network demands. To provide fair bandwidth allocation to the several thousand flows traversing a switch, Approximate Fair Queueing (AFQ) uses multiple priority queues in the switch to approximate ideal fair queueing. However, because commodity switches offer only a limited number of queues, AFQ easily suffers high packet loss and low link utilization. In this paper, we propose Elastic Fair Queueing (EFQ), which leverages the limited priority queues to flexibly achieve both high network utilization and fair bandwidth allocation. EFQ dynamically assigns the free buffer space in the priority queues to each packet to obtain high utilization without sacrificing flow-level fairness. The results of simulation experiments and real implementations show that EFQ reduces the average flow completion time by up to 82% over state-of-the-art fair bandwidth allocation mechanisms.
{"title":"Achieving High Utilization for Approximate Fair Queueing in Data Center","authors":"Jingling Liu, Jiawei Huang, Ning Jiang, Weihe Li, Jianxin Wang","doi":"10.1109/ICDCS47774.2020.00099","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00099","url":null,"abstract":"Modern data centers often host multiple applications with diverse network demands. To provide fair bandwidth allocation to several thousand traversing flows, Approximate Fair Queueing (AFQ) utilizes multiple priority queues in switch to approximate ideal fair queueing. However, due to limited number of queues in commodity switches, AFQ easily experiences high packet loss and low link utilization. In this paper, we propose Elastic Fair Queueing (EFQ), which leverages limited priority queues to flexibly achieve both high network utilization and fair bandwidth allocation. EFQ dynamically assigns the free buffer space in priority queues for each packet to obtain high utilization without sacrificing flow-level fairness. The results of simulation experiments and real implementations show that EFQ reduces the average flow completion time by up to 82% over the state-of-the-art fair bandwidth allocation mechanisms.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128996277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EdgeProg: Edge-centric Programming for IoT Applications
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00038
Borui Li, Wei Dong
IoT application development usually involves separate programming on the device side and the server side. While this separate programming style is sufficient for many simple applications, it is not suitable for complex applications that involve intricate interactions and intensive data processing. Motivated by the increasing popularity of edge computing, we propose EdgeProg, an edge-centric programming approach that simplifies IoT application programming. With EdgeProg, users write application logic in a centralized manner using an augmented If-This-Then-That (IFTTT) syntax and a virtual sensor mechanism. The program is processed at the edge server, which automatically generates the actual application code and intelligently partitions it into device code and server code to achieve optimal latency. EdgeProg employs dynamic linking and loading to deploy the device code on a variety of IoT devices, which run no application-specific code at the start. Results show that EdgeProg achieves an average reduction of 20.96% in execution latency and 79.41% in lines of code compared with state-of-the-art approaches.
{"title":"EdgeProg: Edge-centric Programming for IoT Applications","authors":"Borui Li, Wei Dong","doi":"10.1109/ICDCS47774.2020.00038","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00038","url":null,"abstract":"IoT application development usually involves separate programming at the device side and server side. While separate programming style is sufficient for many simple applications, it is not suitable for many complex applications that involve complex interactions and intensive data processing. We propose EdgeProg, an edge-centric programming approach to simplify IoT application programming, motivated by the increasing popularity of edge computing. With EdgeProg, users could write application logic in a centralized manner with an augmented If-This-Then-That (IFTTT) syntax and virtual sensor mechanism. The program can be processed at the edge server, which can automatically generate the actual application code and intelligently partition the code into device code and server code, for achieving the optimal latency. EdgeProg employs dynamic linking and loading to deploy the device code on a variety of IoT devices, which do not run any application-specific codes at the start. Results show that EdgeProg achieves an average reduction of 20.96% and 79.41% in terms of execution latency and lines of code, compared with state-of-the-art approaches.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121366679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}