Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00064
A Learning Approach with Programmable Data Plane towards IoT Security
Qiaofeng Qin, Konstantinos Poularakis, L. Tassiulas
Security threats arising in massively connected Internet of Things (IoT) devices have attracted wide attention. It is necessary to equip IoT gateways with firewalls to prevent compromised devices from infecting a large number of network nodes. The match-and-action mechanism of Software Defined Networking (SDN) provides the means to differentiate malicious traffic flows from normal ones, mirroring traditional firewall mechanisms while adding a flexible, dynamically reconfigurable twist. However, the vulnerabilities of IoT devices and the heterogeneous protocols coexisting in the same network challenge the extension of SDN into the IoT domain. To overcome these challenges, we leverage the high degree of data plane programmability offered by the P4 language and design a novel two-stage deep learning method for attack detection tailored to that language. Our method generates flow rules that match a small number of header fields from arbitrary protocols while maintaining high attack-detection performance. Evaluations using network traces of different IoT protocols show significant benefits in accuracy, efficiency, and universality over state-of-the-art methods.
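As a toy illustration of the match-and-action idea (not the paper's P4 implementation; the field names, addresses, and actions below are hypothetical), a detector's verdict can be turned into a compact rule matching just a couple of header fields:

```python
def make_rule(flow, action):
    """Build a flow rule matching only two header fields of a flagged flow."""
    return {"match": {"src": flow["src"], "dport": flow["dport"]}, "action": action}

def apply_rules(rules, packet):
    """Return the action of the first rule whose match fields all agree."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "forward"  # default action when no rule matches

flagged = {"src": "10.0.0.7", "dport": 23}   # flow the detector flagged
rules = [make_rule(flagged, "drop")]

print(apply_rules(rules, {"src": "10.0.0.7", "dport": 23}))  # drop
print(apply_rules(rules, {"src": "10.0.0.9", "dport": 80}))  # forward
```

Matching on a handful of fields keeps the rule table small regardless of which IoT protocol produced the traffic.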
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00162
Cross-chain Oracle Based Data Migration Mechanism in Heterogeneous Blockchains
Zhipeng Gao, Honglin Li, Kaile Xiao, Qian Wang
As things currently stand, the blockchain industry is siloed among many different platforms and protocols, resulting in isolated islands of blockchains. Restrictions on asset transfers and data migration between different blockchains reduce usability and user comfort, and hinder novel developments within the blockchain ecosystem. Interoperability will be a central topic of next-generation blockchain technologies. In this paper, we focus on enabling interoperability between two heterogeneous blockchains in the context of data migration. We first build a cross-chain data migration architecture based on a data migration oracle. Second, we design a data migration mechanism on top of this architecture. Employing the proposed architecture is equivalent to opening a secure channel between two heterogeneous blockchains that allows secure data migration. By applying the data migration mechanism, the confidentiality, integrity, and security of migrated data are well guaranteed.
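A minimal sketch of the oracle idea, under the strong simplifying assumptions that each chain is an append-only list and integrity is checked with a SHA-256 digest (the paper's actual mechanism is more involved):

```python
import hashlib
import json

def digest(record):
    """Canonical SHA-256 digest of a JSON-serializable record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class Chain:
    """Toy stand-in for a blockchain: an append-only list of records."""
    def __init__(self):
        self.blocks = []
    def append(self, record):
        self.blocks.append(record)

def migrate(source, target, index):
    """Oracle role: read a record from the source chain, verify its digest,
    and write it (with provenance) to the target chain."""
    record = source.blocks[index]
    if digest(record["data"]) != record["digest"]:
        raise ValueError("integrity check failed")
    target.append({"data": record["data"], "digest": record["digest"], "origin": index})

src, dst = Chain(), Chain()
data = {"asset": "doc-42", "owner": "alice"}   # hypothetical payload
src.append({"data": data, "digest": digest(data)})
migrate(src, dst, 0)
print(dst.blocks[0]["data"]["owner"])  # alice
```

The digest check is what lets the target chain trust data it never produced, which is the core of the "secure channel" claim.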
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00132
Petrel: Community-aware Synchronous Parallel for Heterogeneous Parameter Server
Qihua Zhou, Song Guo, Peng Li, Yanfei Sun, Li Li, M. Guo, Kun Wang
To address the impact of heterogeneity in distributed Deep Learning (DL) systems, most previous approaches prioritize the contribution of fast workers and reduce the involvement of slow workers, incurring workload imbalance and computation inefficiency. We reveal that grouping workers into communities, an abstraction we propose, and handling parameter synchronization at the community level overcomes these limitations and accelerates training convergence. The community abstraction is inspired by prior knowledge about the similarity between workers, which previous work often neglects. These observations motivate a new synchronization mechanism named Community-aware Synchronous Parallel (CSP), which uses the Asynchronous Advantage Actor-Critic (A3C), a Reinforcement Learning (RL) algorithm, to intelligently determine community configuration and improve synchronization performance. The whole idea is implemented in a system called Petrel that achieves a good balance between convergence efficiency and communication overhead. Evaluation under different benchmarks demonstrates that our approach effectively accelerates training convergence and reduces synchronization traffic.
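The community intuition, i.e., grouping workers with similar speeds so they can synchronize together without stragglers, can be sketched with a simple greedy rule (a hypothetical stand-in for CSP's RL-based configuration; worker names and times are invented):

```python
def group_by_speed(workers, tolerance=0.2):
    """Greedy grouping: a worker joins the first community whose founding
    member's iteration time is within `tolerance` (relative) of its own."""
    communities = []
    for name, t in sorted(workers.items(), key=lambda kv: kv[1]):
        for comm in communities:
            if abs(t - comm[0][1]) / comm[0][1] <= tolerance:
                comm.append((name, t))
                break
        else:
            communities.append([(name, t)])
    return communities

# Per-iteration times (seconds) for five hypothetical workers.
workers = {"w1": 1.0, "w2": 1.1, "w3": 2.0, "w4": 2.1, "w5": 4.0}
comms = group_by_speed(workers)
print([[n for n, _ in c] for c in comms])  # [['w1', 'w2'], ['w3', 'w4'], ['w5']]
```

Within a community, a synchronous barrier costs little because members finish at nearly the same time; across communities, synchronization can be relaxed.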
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00020
Jarvis: Moving Towards a Smarter Internet of Things
Anand Mudgerikar, E. Bertino
The deployment of Internet of Things (IoT) devices combined with cyber-physical systems is resulting in complex environments comprising various devices that interact with each other and with users through apps running on computing platforms like mobile phones, tablets, and desktops. In addition, rapid advances in Artificial Intelligence are enabling those devices to autonomously modify their behaviors through techniques such as reinforcement learning (RL). It is clear, however, that ensuring safety and security in such environments is critical. In this paper, we introduce Jarvis, a constrained RL framework for IoT environments that determines optimal device actions with respect to user-defined goals, such as energy optimization, while at the same time ensuring safety and security. Jarvis is scalable and context independent in that it is applicable to any IoT environment with minimal human effort. We instantiate Jarvis for a smart home environment and evaluate its performance using both simulated and real-world data. In terms of safety and security, Jarvis detects 100% of the 214 manually crafted security violations collected from prior work and correctly filters 99.2% of the user-defined benign anomalies and malfunctions from safety violations. For measuring functionality benefits, Jarvis is evaluated using real-world smart home datasets with respect to three user-required functionalities: energy use minimization, energy cost minimization, and temperature optimization. Our analysis shows that Jarvis provides significant advantages over normal device behavior in terms of functionality and over general unconstrained RL frameworks in terms of safety and security.
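One common way to realize constrained RL, which may approximate what a framework like Jarvis does, is to mask unsafe actions before the policy chooses; the safety rule, state names, and Q-values below are invented for illustration:

```python
import random

def choose(state, q, actions, is_safe, eps=0.1):
    """Epsilon-greedy action choice restricted to actions the safety policy allows."""
    allowed = [a for a in actions if is_safe(state, a)]
    if random.random() < eps:
        return random.choice(allowed)        # explore, but only among safe actions
    return max(allowed, key=lambda a: q.get((state, a), 0.0))

# Toy safety rule: never turn the heater on while a window is open.
def is_safe(state, action):
    return not (state == "window_open" and action == "heater_on")

# Hypothetical learned values: the unsafe action looks more rewarding.
q = {("window_open", "heater_on"): 5.0, ("window_open", "heater_off"): 1.0}
print(choose("window_open", q, ["heater_on", "heater_off"], is_safe, eps=0.0))
# heater_off: the higher-valued but unsafe action is filtered out
```

Masking keeps safety enforcement separate from reward optimization, so user goals like energy savings never override safety constraints.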
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00058
More Realistic Website Fingerprinting Using Deep Learning
Weiqi Cui, Tao Chen, Chan-Tin Eric
Website fingerprinting (WF) allows a passive local eavesdropper to monitor a user's encrypted channel and determine which website the user is visiting from the recorded traffic. The effectiveness of deep learning (DL) in WF attacks has been explored in recent work; however, those attacks are all built and evaluated on one-page traces. Our goal is to explore whether deep learning can handle situations in which the captured traces are not best-case for an adversary, such as partial traces and two-page traces, thereby narrowing the gap between lab experiments and realistic conditions. We evaluate our proposed method in both closed-world and open-world settings and find that a Convolutional Neural Network (CNN) outperforms a Long Short-Term Memory (LSTM) network in all scenarios. The CNN also shows great potential for predicting from a smaller number of packets. For a partial trace missing 20% of the packets at its beginning, adding head detection improves accuracy from 8.28% to 86.93% compared to the original DL model. We then report accuracy on two-page traces. With an 80% overlap between two websites, we achieve accuracies of 89.25% and 74.2% for the first and second website in the closed-world evaluation, and 95.5% and 75% in the open world, in simulation. To verify the simulation results, we set up a crawler to collect both training and testing data and gathered the largest two-page-trace testing dataset used to date. The real-world results are consistent with the simulation.
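A crude sketch of why a missing trace head hurts and how re-alignment (head detection in spirit, though not the paper's DL-based method) can recover a match; traces here are toy packet-direction sequences (+1 outgoing, -1 incoming) for two hypothetical sites:

```python
db = {"siteA": [1, -1, -1, 1, 1, -1, 1, 1, -1, -1],
      "siteB": [1, 1, 1, 1, -1, -1, -1, 1, 1, 1]}

def best_offset_dist(sample, ref, max_shift=4):
    """Try aligning the partial trace at several head offsets (a crude
    stand-in for head detection) and keep the best mismatch count over
    the overlapping region."""
    return min(sum(x != y for x, y in zip(sample, ref[s:]))
               for s in range(max_shift + 1))

def classify(sample, db, max_shift=4):
    """Nearest site by best-alignment distance over packet directions."""
    return min(db, key=lambda site: best_offset_dist(sample, db[site], max_shift))

partial = db["siteA"][2:]       # capture missed the first 20% of packets
print(classify(partial, db))    # siteA
```

Without the offset search, the truncated trace compares positions that no longer correspond, which is exactly why a model trained on complete traces collapses on partial ones.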
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00085
Continuous, Real-Time Object Detection on Mobile Devices without Offloading
Miaomiao Liu, Xianzhong Ding, Wan Du
This paper presents AdaVP, a continuous, real-time video processing system for mobile devices that does not offload computation. AdaVP uses Deep Neural Network (DNN) based tools like YOLOv3 for object detection. Since DNN computation is time-consuming, the camera may capture multiple frames while one frame is being processed. To support real-time video processing, we develop a mobile parallel detection and tracking (MPDT) pipeline that executes object detection and tracking in parallel. While the object detector processes a new frame, a lightweight object tracker tracks the objects in the accumulated frames. As tracking accuracy gradually decreases, due to accumulated tracking error and the appearance of new objects, new detection results periodically recalibrate the tracker. In addition, a large DNN model produces high accuracy but requires long processing latency, causing substantial tracking-accuracy degradation. Our experiments show that this degradation also depends on the variation of video content; for example, for a dynamically changing video, tracking accuracy degrades quickly. A model adaptation algorithm is thus developed to adapt the DNN model to the rate of change of video content. We implement AdaVP on a Jetson TX2 and conduct a variety of experiments on a large video dataset. The results reveal that AdaVP improves accuracy over the state-of-the-art solution by up to 43.9%.
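The detect-then-track calibration loop can be sketched sequentially (the real MPDT pipeline runs detector and tracker in parallel); the detector, tracker, and motion values below are stand-ins, not YOLOv3 or the paper's tracker:

```python
def detect(frame):
    """Stand-in for a heavy DNN detector: returns the true box (slow)."""
    return frame["box"]

def track(prev_box, motion):
    """Lightweight tracker: shift the last known box by estimated motion."""
    x, y = prev_box
    dx, dy = motion
    return (x + dx, y + dy)

def pipeline(frames, detect_every=3):
    """Run the detector on every `detect_every`-th frame and the tracker on
    frames in between; each detection recalibrates accumulated tracking drift."""
    out, box = [], None
    for i, f in enumerate(frames):
        if i % detect_every == 0:
            box = detect(f)                 # calibrate with a fresh detection
        else:
            box = track(box, f["motion"])   # cheap update between detections
        out.append(box)
    return out

frames = [{"box": (0, 0), "motion": (0, 0)},
          {"box": (1, 0), "motion": (1, 0)},
          {"box": (2, 0), "motion": (1, 0)},
          {"box": (3, 1), "motion": (0, 0)}]
print(pipeline(frames))  # [(0, 0), (1, 0), (2, 0), (3, 1)]
```

The `detect_every` knob mirrors the paper's trade-off: a bigger, slower model forces more tracked (drift-prone) frames between calibrations.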
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00115
D-SmartML: A Distributed Automated Machine Learning Framework
A. Elrahman, M. Elhelw, Radwa El Shawi, S. Sakr
Machine learning now plays a crucial role in harnessing the value of the massive amounts of data produced every day. Building a high-quality machine learning model is an iterative, complex, and time-consuming process that requires solid knowledge of the various machine learning algorithms as well as experience in effectively tuning their hyper-parameters. With booming demand for machine learning applications, it has been recognized that the number of knowledgeable data scientists cannot scale with the growing data volumes and application needs of our digital world. Therefore, several automated machine learning (AutoML) frameworks have recently been developed to automate the process of Combined Algorithm Selection and Hyper-parameter tuning (CASH). A major limitation of these frameworks, however, is that they are built on top of centralized machine learning libraries (e.g., scikit-learn) that can only run on a single node and thus cannot scale to large data volumes. To tackle this challenge, we demonstrate D-SmartML, a distributed AutoML framework built on top of Apache Spark, a distributed data processing framework. D-SmartML is equipped with a meta-learning mechanism for automated algorithm selection and supports three automated hyper-parameter tuning techniques: distributed grid search, distributed random search, and distributed hyperband optimization. We will demonstrate the scalability of our framework on large datasets, and show how it outperforms the state-of-the-art framework for distributed AutoML optimization, TransmogrifAI.
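As background, the successive-halving loop at the heart of hyperband-style optimization can be shown single-node in plain Python (D-SmartML's Spark implementation distributes the evaluations; the objective below is a toy):

```python
def successive_halving(configs, evaluate, budget=1, eta=2):
    """Core loop behind hyperband: evaluate all configs on a small budget,
    keep the best 1/eta fraction, and repeat with an eta-times larger budget."""
    while len(configs) > 1:
        scores = {c: evaluate(c, budget) for c in configs}
        keep = max(1, len(configs) // eta)
        configs = sorted(configs, key=scores.get, reverse=True)[:keep]
        budget *= eta
    return configs[0]

# Toy objective: score improves with budget and peaks at lr = 0.1 (hypothetical).
def evaluate(lr, budget):
    return budget - abs(lr - 0.1)

best = successive_halving([0.001, 0.01, 0.1, 1.0], evaluate)
print(best)  # 0.1
```

Because the per-config evaluations in each round are independent, they parallelize naturally, which is what makes the technique a good fit for a distributed engine like Spark.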
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00042
Chronus+: Minimizing Switch Buffer Size during Network Updates in Timed SDNs
Xin He, Jiaqi Zheng, Haipeng Dai, Yuhu Sun, Wanchun Dou, Guihai Chen
Although Software-Defined Networks (SDNs) offer a logically centralized perspective, the data plane is still distributed in nature: update commands sent by the centralized controller are executed asynchronously and independently in each switch. Timed SDNs enable synchronous, coordinated update operations by triggering each update command at a predefined time point. Prior work on timed updates mainly focuses on producing a congestion-free update sequence, but a congestion-free timed update sequence may be too long to apply in practice; worse, such an update order may not exist at all. In this paper, we propose Chronus+, a timed update system that uses switch buffers to shorten the update time while minimizing the buffer size required during updates. We formulate the Minimum Switch Buffer Size Problem (MSBSP) as an optimization program and show its hardness. A set of efficient algorithms determines a timed update sequence in polynomial time. Extensive evaluations in Mininet and large-scale simulations show that Chronus+ reduces update time by at least 17% and switch buffer usage by at least 27% compared with state-of-the-art approaches.
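A toy model of the buffer/update-time trade-off (not the paper's MSBSP formulation; flow names, rates, and times are invented): packets that arrive in the gap between removing a flow's old rule and installing its new one must be buffered by the switch.

```python
def buffer_needed(rate, remove_t, install_t):
    """Buffer for one flow = arrival rate x duration of the rule gap
    (zero if the new rule is installed no later than the old one is removed)."""
    return rate * max(0, install_t - remove_t)

# (name, packet rate, old-rule removal time, new-rule install time)
flows = [("f1", 5, 0, 2), ("f2", 3, 1, 1)]
total = sum(buffer_needed(r, rm, ins) for _, r, rm, ins in flows)
print(total)  # 10: f1 buffers 5*2 units; f2 has no rule gap, so none
```

Shrinking the gaps by precisely timing the commands, as timed SDNs allow, directly shrinks the buffer a switch must provision, which is the quantity Chronus+ minimizes.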
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00193
On Reading Fresher Snapshots in Parallel Snapshot Isolation
Masoomeh Javidi Kishi, R. Palmieri
In this paper we briefly present FPSI, a distributed transactional in-memory key-value store whose primary goal is to let transactions read more up-to-date (fresher) versions of shared objects than existing implementations of the well-known Parallel Snapshot Isolation (PSI) correctness level, in the absence of a synchronized clock service among nodes. FPSI builds upon Walter, an implementation of PSI well suited for social applications. The novel concurrency control at the core of FPSI allows its abort-free read-only transactions to access the latest version of objects upon their first contact with a node.
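The freshness idea rests on multiversioning: a reader's snapshot timestamp decides which committed versions it sees, so fixing the snapshot at first contact with a node, rather than earlier, yields newer data. A minimal multiversion-store sketch (not FPSI's actual protocol; keys and timestamps are invented):

```python
class VersionedStore:
    """Minimal multiversion store: a read at a snapshot sees the newest
    version committed at or before the snapshot timestamp."""
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value)

    def write(self, key, value, ts):
        self.versions.setdefault(key, []).append((ts, value))

    def read(self, key, snapshot_ts):
        committed = [(t, v) for t, v in self.versions[key] if t <= snapshot_ts]
        return max(committed)[1]   # newest version visible at the snapshot

store = VersionedStore()
store.write("x", "old", ts=1)
store.write("x", "new", ts=5)
# A snapshot taken at first contact (ts=6) sees the freshest committed
# version; a snapshot fixed earlier (ts=3) reads stale data.
print(store.read("x", snapshot_ts=6))  # new
print(store.read("x", snapshot_ts=3))  # old
```

Keeping reads on a fixed snapshot is also what makes the read-only transactions abort-free: they never conflict with concurrent writers.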
Pub Date: 2020-11-01 | DOI: 10.1109/ICDCS47774.2020.00063
Ülkü Meteriz, Necip Fazil Yildiran, Joong-Hyo Kim, David A. Mohaisen
The extensive use of smartphones and wearable devices has enabled many useful applications. With Global Positioning System (GPS)-equipped smart and wearable devices, many applications can gather, process, and share rich metadata, such as geolocation, trajectories, elevation, and time. Fitness applications, such as Runkeeper and Strava, utilize this information for activity tracking and have recently witnessed a boom in popularity. These fitness tracker applications have their own web platforms and allow users to share activities there, or even on other social network platforms. To preserve the privacy of users while still allowing sharing, several of these platforms let users disclose only partial information, such as the elevation profile of an activity, which supposedly would not leak the location of the users. In this work, and as a cautionary tale, we present a proof of concept examining the extent to which elevation profiles can be used to predict the location of users. To tackle this problem, we devise three plausible threat settings under which the city or borough of the targets can be predicted; the settings differ in the amount of information available to the adversary for launching the prediction attacks. After establishing that simple features of elevation profiles, e.g., spectral features, are insufficient, we devise both a natural language processing (NLP)-inspired text-like representation and a computer vision-inspired image-like representation of elevation profiles, converting the problem at hand into text and image classification problems. We use both traditional machine learning- and deep learning-based techniques, and achieve a prediction success rate ranging from 59.59% to 95.83%. The findings are alarming and highlight that sharing elevation information may carry significant location privacy risks.
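One way to picture an NLP-inspired text-like representation of an elevation profile is to quantize successive elevation changes into symbolic tokens, after which off-the-shelf text classifiers can be applied. This is an illustrative sketch only; the function name, token scheme, and bin size are assumptions, not the paper's exact pipeline.

```python
# Illustrative sketch (assumed names and binning, not the paper's method):
# turn an elevation profile into a text-like token sequence by quantizing
# successive elevation deltas.
def profile_to_tokens(elevations, bin_size=5.0):
    """Map each elevation delta to a symbolic token:
    'U2' ~ a rise of about 2*bin_size meters,
    'D1' ~ a drop of about bin_size meters,
    'F0' ~ a flat segment."""
    tokens = []
    for prev, curr in zip(elevations, elevations[1:]):
        delta = curr - prev
        level = int(abs(delta) // bin_size)
        if delta > 0:
            tokens.append(f"U{level}")
        elif delta < 0:
            tokens.append(f"D{level}")
        else:
            tokens.append("F0")
    return " ".join(tokens)
```

The resulting token strings could then be fed to any bag-of-words or sequence classifier, mirroring how the abstract's text-like representation turns location prediction into text classification.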
{"title":"Understanding the Potential Risks of Sharing Elevation Information on Fitness Applications","authors":"Ülkü Meteriz, Necip Fazil Yildiran, Joong-Hyo Kim, David A. Mohaisen","doi":"10.1109/ICDCS47774.2020.00063","DOIUrl":"https://doi.org/10.1109/ICDCS47774.2020.00063","url":null,"abstract":"The extensive use of smartphones and wearable devices has facilitated many useful applications. For example, with Global Positioning System (GPS)-equipped smart and wearable devices, many applications can gather, process, and share rich metadata, such as geolocation, trajectories, elevation, and time. For example, fitness applications, such as Runkeeper and Strava, utilize information for activity tracking, and have recently witnessed a boom in popularity. Those fitness tracker applications have their own web platforms, and allow users to share activities on such platforms, or even with other social network platforms. To preserve privacy of users while allowing sharing, several of those platforms may allow users to disclose partial information, such as the elevation profile for an activity, which supposedly would not leak the location of the users. In this work, and as a cautionary tale, we create a proof of concept where we examine the extent to which elevation profiles can be used to predict the location of users. To tackle this problem, we devise three plausible threat settings under which the city or borough of the targets can be predicted. Those threat settings define the amount of information available to the adversary to launch the prediction attacks. Establishing that simple features of elevation profiles, e.g., spectral features, are insufficient, we devise both natural language processing (NLP)-inspired text-like representation and computer vision-inspired image-like representation of elevation profiles, and we convert the problem at hand into text and image classification problem. 
We use both traditional machine learning- and deep learning-based techniques, and achieve a prediction success rate ranging from 59.59% to 95.83%. The findings are alarming, and highlight that sharing elevation information may have significant location privacy risks.","PeriodicalId":158630,"journal":{"name":"2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)","volume":"202 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129756844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}