Keddah: Capturing Hadoop Network Behaviour
Jie Deng, Gareth Tyson, F. Cuadrado, S. Uhlig. ICDCS 2017. DOI: 10.1109/ICDCS.2017.211
As a distributed system, Hadoop relies heavily on the network to complete data processing jobs. While Hadoop traffic is perceived to be critical for job execution performance, its actual behaviour is still poorly understood. This lack of understanding greatly complicates research relying on Hadoop workloads. In this paper, we explore Hadoop traffic through experimentation. We analyse the traffic generated by multiple types of MapReduce jobs under varying input sizes and cluster configuration parameters. As a result, we present Keddah, a toolchain for capturing, modelling and reproducing Hadoop traffic for use with network simulators. Keddah can be used to create empirical Hadoop traffic models, enabling reproducible Hadoop research in more realistic scenarios.
Enabling Wide Area Data Analytics with Collaborative Distributed Processing Pipelines (CDPPs)
A. Feldmann, M. Hauswirth, V. Markl. ICDCS 2017. DOI: 10.1109/ICDCS.2017.332
Life without the Internet is no longer possible, nor even thinkable. Consider the effects of a prolonged Internet outage. At the least impactful level, most of our children and peers would simply no longer be able to interact with one another; they would severely miss out on the quality of their leisure time, which increasingly relies on social networks, online games, YouTube, and other online entertainment. This may be a nuisance, but it is still tolerable. More seriously, and more economically relevant, manufacturing and trade would no longer work, as interactions within and among companies rely on a working Internet. Just-in-time ordering mechanisms and Internet of Things-enhanced production chains within the Industry 4.0 framework would no longer be operational, since old-style communication means such as phone and fax have been completely replaced. Indeed, none of these alternative mechanisms (fax, phone, and also messaging) would be available either, as they also rely on Internet technology. Even worse, the control of critical infrastructures would be severely affected, as they increasingly rely on the Internet for gathering input data and propagating control information. Moreover, all big data analytics applications, including financial transactions, would fail, as they could no longer gather and process their input data. Worse still, the fact that nowadays there is “no communication without energy” also means that the reciprocal statement applies: there is no “energy without communication.”
SPHINX: A Password Store that Perfectly Hides Passwords from Itself
Maliheh Shirvanian, Stanislaw Jarecki, H. Krawczyk, Nitesh Saxena. ICDCS 2017. DOI: 10.1109/ICDCS.2017.64
Password managers (also known as stores or vaults) allow a user to store and retrieve (usually high-entropy) passwords for her multiple password-protected services by interacting with a "device" serving the role of the manager (e.g., a smartphone or an online third-party service) on the basis of a single memorable (low-entropy) master password. Existing password managers work well to defeat offline dictionary attacks upon web service compromise, assuming the use of high-entropy passwords is enforced. However, they are vulnerable to leakage of all passwords in the event the device is compromised, due to the need to store the passwords encrypted under the master password and/or the need to input the master password to the device (as in smartphone managers). Evidence exists that password managers can be attractive attack targets. In this paper, we introduce a novel approach to password management, called SPHINX, which remains secure even when the password manager itself has been compromised. In SPHINX, the information stored on the device is information-theoretically independent of the user's master password: an attacker breaking into the device learns no information about the master password or the user's site-specific passwords. Moreover, an attacker with full control of the device, even at the time the user interacts with it, learns nothing about the master password; the password is not entered into the device in plaintext form or in any other way that may leak information about it. Unlike existing managers, SPHINX produces strictly high-entropy passwords and makes it compulsory for users to register these randomized passwords with the web services, hence fully defeating offline dictionary attacks upon service compromise. The design and security of SPHINX are based on the device-enhanced PAKE model of Jarecki et al., which provides the theoretical basis for this construction and is backed by rigorous cryptographic proofs of security. While SPHINX is suitable for different device and online platforms, in this paper we report on its concrete instantiation on smartphones, given their popularity and trustworthiness as password managers (or even as two-factor authentication devices). We present the design, implementation and performance evaluation of SPHINX, offering prototype browser plugins, smartphone apps and transparent device-client communication. Based on our inspection analysis, the overall user experience of SPHINX improves upon current managers. We also report on a lab-based usability study of SPHINX, which indicates that users' perception of SPHINX security and usability is high and satisfactory when compared to regular password-based authentication. Finally, we discuss how SPHINX may be extended to an online service for the purpose of back-up or as an independent password manager.
{"title":"SPHINX: A Password Store that Perfectly Hides Passwords from Itself","authors":"Maliheh Shirvanian, Stanislaw Jarecki, H. Krawczyk, Nitesh Saxena","doi":"10.1109/ICDCS.2017.64","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.64","url":null,"abstract":"Password managers (aka stores or vaults) allow a user to store and retrieve (usually high-entropy) passwords for her multiple password-protected services by interacting with a \"device\" serving the role of the manager (e.g., a smartphone or an online third-party service) on the basis of a single memorable (low-entropy) master password. Existing password managers work well to defeat offline dictionary attacks upon web service compromise, assuming the use of high-entropy passwords is enforced. However, they are vulnerable to leakage of all passwords in the event the device is compromised, due to the need to store the passwords encrypted under the master password and/or the need to input the master password to the device (as in smartphone managers). Evidence exists that password managers can be attractive attack targets. In this paper, we introduce a novel approach to password management, called SPHINX, which remains secure even when the password manager itself has been compromised. In SPHINX, the information stored on the device is information theoretically independent of the user's master password - an attacker breaking into the device learns no information about the master password or the user's site-specific passwords. Moreover, an attacker with full control of the device, even at the time the user interacts with it, learns nothing about the master password - the password is not entered into the device in plaintext form or in any other way that may leak information on it. Unlike existing managers, SPHINX produces strictly high-entropy passwords and makes it compulsory for the users to register these randomized passwords with the web services, hence fully defeating offline dictionary attack upon service compromise. The design and security of SPHINX is based on the device-enhanced PAKE model of Jarecki et al. that provides the theoretical basis for this construction and is backed by rigorous cryptographic proofs of security. While SPHINX is suitable for different device and online platforms, in this paper, we report on its concrete instantiation on smartphones given their popularity and trustworthiness as password managers (or even two-factor authentication). We present the design, implementation and performance evaluation of SPHINX, offering prototype browser plugins, smartphone apps and transparent device-client communication. Based on our inspection analysis, the overall user experience of SPHINX improves upon current managers. We also report on a lab-based usability study of SPHINX, which indicates that users' perception of SPHINX security and usability is high and satisfactory when compared to regular password-based authentication. 
Finally, we discuss how SPHINX may be extended to an online service for the purpose of back-up or as an independent password manager.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123996432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
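The "device learns nothing" property comes from the device evaluating its secret key on a blinded value, in the style of an oblivious PRF from the device-enhanced PAKE literature the abstract cites. The toy sketch below shows that blind/evaluate/unblind exchange over a small multiplicative group; the parameters are deliberately tiny and every name is illustrative, since the actual SPHINX instantiation uses elliptic-curve cryptography and a carefully specified protocol.

```python
# Toy sketch of blinded evaluation: the device applies its long-term key to a
# *blinded* group element, so it never sees the master password or the derived
# site password. Deliberately insecure toy parameters, for illustration only.
import hashlib
import secrets

p = 1019          # toy safe prime, p = 2q + 1
q = (p - 1) // 2  # prime order of the quadratic-residue subgroup

def hash_to_group(master_pw: str, domain: str) -> int:
    h = int.from_bytes(hashlib.sha256(f"{master_pw}|{domain}".encode()).digest(), "big")
    return pow(h % p, 2, p) or 4        # squaring maps into the order-q subgroup

# --- client side: blind the hashed password ---
def blind(elem: int):
    r = secrets.randbelow(q - 1) + 1
    return pow(elem, r, p), r

# --- device side: evaluate with its key; it only ever sees the blinded value ---
device_key = secrets.randbelow(q - 1) + 1
def device_evaluate(blinded: int) -> int:
    return pow(blinded, device_key, p)

# --- client side: unblind and derive the site-specific password ---
def unblind(evaluated: int, r: int) -> int:
    r_inv = pow(r, -1, q)               # exponents work modulo the subgroup order
    return pow(evaluated, r_inv, p)

elem = hash_to_group("correct horse battery staple", "example.com")
blinded, r = blind(elem)
site_secret = unblind(device_evaluate(blinded), r)   # equals elem ** device_key mod p
site_password = hashlib.sha256(str(site_secret).encode()).hexdigest()[:16]
print(site_password)
```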
A Scalable and Distributed Approach for NFV Service Chain Cost Minimization
Zijun Zhang, Zongpeng Li, Chuan Wu, Chuanhe Huang. ICDCS 2017. DOI: 10.1109/ICDCS.2017.210
Network function virtualization (NFV) represents the latest technological advance in network service provisioning. Traditional hardware middleboxes are replaced by software programs running on industry-standard servers and virtual machines, for service agility, flexibility, and cost reduction. NFV users are provisioned with service chains composed of virtual network functions (VNFs). A fundamental problem in NFV service chain provisioning is to satisfy user demands at minimum system-wide cost. In this work we jointly consider two types of cost, nodal resource cost and link delay cost, and formulate the service chain provisioning problem as a nonlinear optimization. Through the method of auxiliary variables, we transform the optimization problem into a separable form, and then apply the alternating direction method of multipliers (ADMM) to design scalable and fully distributed solutions. Through simulation studies, we verify the convergence and efficacy of our distributed algorithm design.
{"title":"A Scalable and Distributed Approach for NFV Service Chain Cost Minimization","authors":"Zijun Zhang, Zongpeng Li, Chuan Wu, Chuanhe Huang","doi":"10.1109/ICDCS.2017.210","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.210","url":null,"abstract":"Network function virtualization (NFV) represents the latest technology advancement in network service provisioning. Traditional hardware middleboxes are replaced by software programs running on industry standard servers and virtual machines, for service agility, flexibility, and cost reduction. NFV users are provisioned with service chains composed of virtual network functions (VNFs). A fundamental problem in NFV service chain provisioning is to satisfy user demands with minimum system-wide cost. We jointly consider two types of cost in this work: nodal resource cost and link delay cost, and formulate the service chain provisioning problem using nonlinear optimization. Through the method of auxiliary variables, we transform the optimization problem into its separable form, and then apply the alternating direction method of multipliers (ADMM) to design scalable and fully distributed solutions. Through simulation studies, we verify the convergence and efficacy of our distributed algorithm design.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124813246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-aware TCP in Data Center Networks
Sen Liu, Jiawei Huang, Yutao Zhou, Jianxin Wang, T. He. ICDCS 2017. DOI: 10.1109/ICDCS.2017.175
In modern data centers, many flow-based and task-based schemes have been proposed to speed up data transmission in order to provide fast, reliable services for millions of users. However, existing flow-based schemes treat all flows in isolation, contributing little to, or even hurting, user experience because of stalled flows. Other prevalent task-based approaches, such as centralized and decentralized scheduling, are either complex or unable to share task information. In this work, we first reveal that relinquishing bandwidth from leading flows to stalled ones effectively reduces task completion time. We then present the design and implementation of a general supporting scheme that shares flow-tardiness information through receiver-driven coordination. Our scheme can be flexibly and widely integrated with state-of-the-art TCP protocols designed for data centers, while requiring no modification to switches. Through testbed experiments and simulations of typical data center applications, we show that our scheme reduces task completion time by 70% and 50% compared with flow-based protocols (e.g., DCTCP, L2DCT) and task-based scheduling (e.g., Baraat), respectively. Moreover, our scheme outperforms other approaches by 18% to 25% in prevalent data center topologies.
{"title":"Task-aware TCP in Data Center Networks","authors":"Sen Liu, Jiawei Huang, Yutao Zhou, Jianxin Wang, T. He","doi":"10.1109/ICDCS.2017.175","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.175","url":null,"abstract":"In modern data centers, many flow-based and task-based schemes have been proposed to speed up the data transmission in order to provide fast, reliable services for millions of users. However, existing flow-based schemes treat all flows in isolation, contributing less to or even hurting user experience due to the stalled flows. Other prevalent task-based approaches, such as centralized and decentralized scheduling, are sophisticated or unable to share task information. In this work, we first reveal that relinquishing bandwidth of leading flows to the stalled ones effectively reduces the task completion time. We further present the design and implementation of a general supporting scheme that shares the flow-tardiness information through a receiver-driven coordination. Our scheme can be flexibly and widely integrated with the state-of-the-art TCP protocols designed for data centers, while making no modification on switches. Through the testbed experiments and simulations of typical data center applications, we show that our scheme reduces the task completion time by 70% and 50% compared with the flow-based protocols (e.g. DCTCP, L2DCT) and task-based scheduling (e.g. Baraat), respectively. Moreover, our scheme also outperforms other approaches by 18% to 25% in prevalent topologies of data center.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"227 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123035397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Networked Drone Cameras for Sports Streaming
Xiaoli Wang, Aakanksha Chowdhery, M. Chiang. ICDCS 2017. DOI: 10.1109/ICDCS.2017.200
A network of drone cameras can be deployed to cover live events, such as a high-action sports game played on a large field, but managing networked drone cameras in real time is challenging. Distributed approaches yield suboptimal solutions because of the lack of coordination, while coordination through a centralized controller incurs round-trip latencies of several hundred milliseconds over a wireless channel. We propose a fog-networking-based system architecture to automatically coordinate a network of camera-equipped drones to capture and broadcast the dynamically changing scenes of interest in a sports game. We design both optimal and practical algorithms to balance the tradeoff between two metrics: coverage of the most important scenes and streamed video bitrate. To compensate for network round-trip latencies, the centralized controller predicts which locations the drones should cover next. The controller maximizes video bitrate by associating each drone to an optimally matched server, and dynamically re-assigns drones as relay nodes to boost throughput in low-throughput scenarios. This dynamic assignment at the centralized controller occurs at the slower time scale permitted by round-trip latencies, while the predictive approach and the drones' local decisions ensure that the system works in real time. Experimental results over tens of flights on the field show that the system performs well: for example, 8 drones achieve 94% coverage and (on average) 2K video support at 20 Mbps by optimizing between coverage and throughput. By dynamically allocating drones to cover the game or act as relays, our system also demonstrates a 2x gain over systems that maximize static coverage alone, which achieve only 9 Mbps video throughput.
{"title":"Networked Drone Cameras for Sports Streaming","authors":"Xiaoli Wang, Aakanksha Chowdhery, M. Chiang","doi":"10.1109/ICDCS.2017.200","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.200","url":null,"abstract":"A network of drone cameras can be deployed to cover live events, such as high-action sports game played on a large field, but managing networked drone cameras in real-time is challenging. Distributed approaches yield suboptimal solutions from lack of coordination but coordination with a centralized controller incurs round-trip latencies of several hundreds of milliseconds over a wireless channel. We propose a fog-networking based system architecture to automatically coordinate a network of drones equipped with cameras to capture and broadcast the dynamically changing scenes of interest in a sports game. We design both optimal and practical algorithms to balance the tradeoff between two metrics: coverage of the most important scenes and streamed video bitrate. To compensate for network round-trip latencies, the centralized controller uses a predictive approach to predict which locations the drones should cover next. The controller maximizes video bitrate by associating each drone to an optimally matched server and dynamically re-assigns drones as relay nodes to boost the throughput in low-throughput scenarios. This dynamic assignment at centralized controller occurs at slower time-scale permitted by round-trip latencies, while the predictive approach and drones’ local decision ensures that the system works in real-time. Experimental results over tens of flights on the field suggest our system can achieve really good performance, for example, 8 drones can achieve a tradeoff of 94% coverage and (on average) 2K video support at 20 Mbps by optimizing between coverage and throughput. By dynamically allocating drones to cover the game or act as relays, our system also demonstrates a 2x gain over systems maximizing static coverage alone that achieves only 9 Mbps video throughput.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123059450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SRLB: The Power of Choices in Load Balancing with Segment Routing
Yoann Desmouceaux, P. Pfister, Jerome Tollet, M. Townsley, T. Clausen. ICDCS 2017. DOI: 10.1109/ICDCS.2017.180
Network load-balancers generally either do not take application state into account, or do so at the cost of a centralized monitoring system. This paper introduces a load-balancer running exclusively within the IP forwarding plane, i.e., in an application-protocol-agnostic fashion, yet one which still provides application awareness and makes real-time, decentralized decisions. To that end, IPv6 Segment Routing is used to direct data packets from a new flow through a chain of candidate servers, until one decides to accept the connection based on its local state. This way, applications themselves naturally decide how to share incoming connections, while incurring minimal network overhead and no out-of-band signaling. Tests on different workloads, including realistic ones such as replaying actual Wikipedia access traffic towards a set of replica Wikipedia instances, show significant performance benefits in terms of shorter response times when compared to a traditional random load-balancer.
{"title":"SRLB: The Power of Choices in Load Balancing with Segment Routing","authors":"Yoann Desmouceaux, P. Pfister, Jerome Tollet, M. Townsley, T. Clausen","doi":"10.1109/ICDCS.2017.180","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.180","url":null,"abstract":"Network load-balancers generally either do not take application state into account, or do so at the cost of a centralized monitoring system. This paper introduces a load-balancer running exclusively within the IP forwarding plane, i.e. in an application protocol agnostic fashion - yet which still provides application-awareness and makes real-time, decentralized decisions. To that end, IPv6 Segment Routing is used to direct data packets from a new flow through a chain of candidate servers, until one decides to accept the connection, based on its local state. This way, applications themselves naturally decide on how to share incoming connections, while incurring minimal network overhead, and no out-of-band signaling. Tests on different workloads - including realistic workloads such as replaying actual Wikipedia access traffic towards a set of replica Wikipedia instances - show significant performance benefits, in terms of shorter response times, when compared to a traditional random load-balancer.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128142274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calvin Constrained — A Framework for IoT Applications in Heterogeneous Environments
Amardeep Mehta, R. Baddour, Fredrik Svensson, H. Gustafsson, E. Elmroth. ICDCS 2017. DOI: 10.1109/ICDCS.2017.181
Calvin is an IoT framework for application development, deployment and execution in heterogeneous environments, including clouds, edge resources, and embedded or constrained devices. Inside Calvin, all distributed resources are viewed by the application as one environment. The framework provides multi-tenancy and simplifies the development of IoT applications, which are represented as a dataflow of application components (named actors) and their communication. The idea behind Calvin is similar to the serverless architecture and can be seen as Actor as a Service rather than Function as a Service. This makes Calvin powerful, as it not only scales actors quickly but also provides easy actor migration. In this work, we propose Calvin Constrained, an extension of the Calvin framework to resource-constrained devices. Due to the limited memory and processing power of embedded devices, the constrained side of the framework can only support a subset of the Calvin features. The current implementation of Calvin Constrained supports actors implemented in C as well as Python, where support for Python actors is enabled by using MicroPython as a statically allocated library; this enables automatic management of state variables and enhances code reusability. As would be expected, Python-coded actors demand more resources than C-coded ones. We show that the extra resources needed are manageable on current off-the-shelf microcontroller-equipped devices when using the Calvin framework.
{"title":"Calvin Constrained — A Framework for IoT Applications in Heterogeneous Environments","authors":"Amardeep Mehta, R. Baddour, Fredrik Svensson, H. Gustafsson, E. Elmroth","doi":"10.1109/ICDCS.2017.181","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.181","url":null,"abstract":"Calvin is an IoT framework for application development, deployment and execution in heterogeneous environments, that includes clouds, edge resources, and embedded or constrained resources. Inside Calvin, all the distributed resources are viewed as one environment by the application. The framework provides multi-tenancy and simplifies development of IoT applications, which are represented using a dataflow of application components (named actors) and their communication. The idea behind Calvin poses similarity with the serverless architecture and can be seen as Actor as a Service instead of Function as a Service. This makes Calvin very powerful as it does not only scale actors quickly but also provides an easy actor migration capability. In this work, we propose Calvin Constrained, an extension to the Calvin framework to cover resource-constrained devices. Due to limited memory and processing power of embedded devices, the constrained side of the framework can only support a limited subset of the Calvin features. The current implementation of Calvin Constrained supports actors implemented in C as well as Python, where the support for Python actors is enabled by using MicroPython as a statically allocated library, by this we enable the automatic management of state variables and enhance code re-usability. As would be expected, Python-coded actors demand more resources over C-coded ones. We show that the extra resources needed are manageable on current off-the-shelve micro-controller-equipped devices when using the Calvin framework.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128621881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structured Overlay Networks for a New Generation of Internet Services
Amy Babay, C. Danilov, John Lane, Michal Miskin-Amir, Daniel Obenshain, John L. Schultz, J. Stanton, Thomas Tantillo, Y. Amir. ICDCS 2017. DOI: 10.1109/ICDCS.2017.119
The dramatic success and scaling of the Internet was made possible by the core principle of keeping it simple in the middle and smart at the edge (the end-to-end principle). However, new applications bring new demands, and for many emerging applications, the Internet paradigm presents limitations. For applications in this new generation of Internet services, structured overlay networks offer a powerful framework for deploying specialized protocols that can provide new capabilities beyond what the Internet natively supports, by leveraging global state and in-network processing. The structured overlay concept includes three principles: a resilient network architecture, a flexible overlay node software architecture that exploits global state and unlimited programmability, and flow-based processing. We demonstrate the effectiveness of structured overlay networks in supporting today's demanding applications and propose forward-looking ideas for leveraging the framework to develop protocols that push the boundaries of what is possible in terms of performance and resilience.
{"title":"Structured Overlay Networks for a New Generation of Internet Services","authors":"Amy Babay, C. Danilov, John Lane, Michal Miskin-Amir, Daniel Obenshain, John L. Schultz, J. Stanton, Thomas Tantillo, Y. Amir","doi":"10.1109/ICDCS.2017.119","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.119","url":null,"abstract":"The dramatic success and scaling of the Internet was made possible by the core principle of keeping it simple in the middle and smart at the edge (or the end-to-end principle). However, new applications bring new demands, and for many emerging applications, the Internet paradigm presents limitations. For applications in this new generation of Internet services, structured overlay networks offer a powerful framework for deploying specialized protocols that can provide new capabilities beyond what the Internet natively supports by leveraging global state and in-network processing. The structured overlay concept includes three principles: A resilient network architecture, a flexible overlay node software architecture that exploits global state and unlimited programmability, and flow-based processing. We demonstrate the effectiveness of structured overlay networks in supporting today's demanding applications and propose forward-looking ideas for leveraging the framework to develop protocols that push the boundaries of what is possible in terms of performance and resilience.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123750078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PhaseBeat: Exploiting CSI Phase Data for Vital Sign Monitoring with Commodity WiFi Devices
Xuyu Wang, Chao Yang, S. Mao. ICDCS 2017. DOI: 10.1109/ICDCS.2017.206
Vital signs, such as respiration and heartbeat, are useful for health monitoring, since such signals provide important clues about medical conditions. Effective solutions are needed to provide contact-free, easy-to-deploy, low-cost, and long-term vital sign monitoring. In this paper, we present PhaseBeat, which exploits channel state information (CSI) phase difference data to monitor breathing and heartbeat with commodity WiFi devices. We provide a rigorous analysis of the CSI phase difference data with respect to its stability and periodicity. Based on this analysis, we design and implement the PhaseBeat system with off-the-shelf WiFi devices and conduct an extensive experimental study to validate its performance. Our experimental results demonstrate the superior performance of PhaseBeat over existing approaches in various indoor environments.
{"title":"PhaseBeat: Exploiting CSI Phase Data for Vital Sign Monitoring with Commodity WiFi Devices","authors":"Xuyu Wang, Chao Yang, S. Mao","doi":"10.1109/ICDCS.2017.206","DOIUrl":"https://doi.org/10.1109/ICDCS.2017.206","url":null,"abstract":"Vital signs, such as respiration and heartbeat, are useful to health monitoring since such signals provide important clues of medical conditions. Effective solutions are needed to provide contact-free, easy deployment, low-cost, and long-term vital sign monitoring. In this paper, we present PhaseBeat to exploit channel state information (CSI) phase difference data to monitor breathing and heartbeat with commodity WiFi devices. We provide a rigorous analysis of the CSI phase difference data with respect to its stability and periodicity. Based on the analysis, we design and implement the PhaseBeat system with off-the-shelf WiFi devices, and conduct an extensive experimental study to validate its performance. Our experimental results demonstrate the superior performance of PhaseBeat over existing approaches in various indoor environments.","PeriodicalId":127689,"journal":{"name":"2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128544434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}