Cloud providers auction their excess capacity as dynamically priced virtual instances. These spot instances provide significant savings compared to on-demand, fixed-price instances. Users willing to use these resources are asked to provide a maximum bid price per hour, and the cloud provider runs the instances as long as the market price stays below the user's bid price. By using such resources, users are explicitly exposed to failures and need to adapt their applications to provide some level of fault tolerance. In this paper we examine the effect of bidding on virtual HPC clusters composed of spot instances. We describe the effects of uniform versus non-uniform bidding in terms of failure rate and failure model. We propose an initial attempt at predicting the runtime of a parallel application under various bidding strategies and system parameters. We describe the relationship between bidding strategies and programming models. We build a preliminary optimization model that uses real price traces from Amazon Web Services as inputs, as well as instrumented values for the processing and network capacities of cluster instances on the EC2 service. Our results give preliminary insights into the relationship between non-uniform bidding and application scaling strategies.
{"title":"Banking on Decoupling: Budget-Driven Sustainability for HPC Applications on EC2 Spot Instances","authors":"Moussa Taifi","doi":"10.1109/SRDS.2012.11","DOIUrl":"https://doi.org/10.1109/SRDS.2012.11","url":null,"abstract":"Cloud providers are auctioning their excess capacity using dynamically priced virtual instances. These spot instances provide significant savings compared to on-demand or fixed price instances. The users willing to use these resources are asked to provide a maximum bid price per hour, and the cloud provider runs the instances as long as the market price is below the user's bid price. By using such resources, the users are exposed explicitly to failures and need to adapt their applications to provide some level of fault tolerance. In this paper we expose the effect of bidding in the case of virtual HPC clusters composed of spot instances. We describe the interesting effect of uniform versus non-uniform bidding, in terms of failure rate and failure model. We propose an initial attempt to deal with the problem of predicting the runtime of a parallel application under various bidding strategies and various system parameters. We describe the relationship between bidding strategies and programming models. We build a preliminary optimization model that uses real price traces from Amazon Web Services as inputs, as well as instrumented values related to the processing and network capacities of clusters instances on the EC2 services. Our results show preliminary insights into the relationship between non-uniform bidding and application scaling strategies.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128004474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Broadcast authentication is an important security mechanism for resource-constrained devices such as Wireless Sensor Networks (WSNs). In this paper we review how broadcast authentication has been enforced in this context, and we show that most current implementations (generally based on lightweight hash chains implementing time-limited validity of the authentication property) leave open the possibility of a dreadful attack. We detail such an attack and propose three different protocols to cope with it: PASS, TASS, and PTASS. We further analyze the overhead introduced by these protocols in terms of set-up, transmission overhead, and on-device verification.
{"title":"Broadcast Authentication for Resource Constrained Devices: A Major Pitfall and Some Solutions","authors":"R. D. Pietro, F. Martinelli, Nino Vincenzo Verde","doi":"10.1109/SRDS.2012.13","DOIUrl":"https://doi.org/10.1109/SRDS.2012.13","url":null,"abstract":"Broadcast authentication is an important security mechanism for resource constrained devices, like Wireless Sensor Networks (WSNs). In this paper we revise how broadcast authentication has been enforced in this context, and we show that most of the current implementations (generally based on lightweight hash chain implementing time limited validity of the authentication property) leave open the possibility of a dreadful attack. We detail such an attack, and propose three different protocols to cope with it: PASS, TASS, and PTASS. We further analyze the overhead introduced by these protocols in terms of set-up, transmission overhead, and on device verification.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132672249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditionally, (nonmasking and masking) fault tolerance has focused on ensuring that, after the occurrence of faults, the program recovers to states from which it continues to satisfy its original specification. A problem with this limited notion is that, in some cases, it may be impossible to recover to states from which the entire original specification is satisfied. For this reason, one can consider a fault-tolerant graceful-degradation program, which ensures that upon the occurrence of faults the program recovers to states from which a (given) subset of its specification is satisfied. Typically, the subset satisfied in this way would consist of the critical requirements. In this paper, we focus on automatically revising a given program to obtain a corresponding graceful program, i.e., a program that satisfies a weaker specification: it contains all behaviors of the original program plus new behaviors that satisfy the given subset of the specification, and the revision removes no behavior from the original program. This requirement, that no new behaviors be added in the absence of faults, differentiates this work from previous work on controller synthesis as well as on automated addition of fault tolerance.
{"title":"Automatic Generation of Graceful Programs","authors":"Yiyan Lin, S. Kulkarni","doi":"10.1109/SRDS.2012.8","DOIUrl":"https://doi.org/10.1109/SRDS.2012.8","url":null,"abstract":"Traditionally, (nonmasking and masking) fault tolerance has focused on ensuring that after the occurrence of faults, the program recovers to states from where it continues to satisfy its original specification. However, a problem with this limited notion is that, in some cases, it may be impossible to recover to states from where the entire original specification is satisfied. For this reason, one can consider a fault-tolerant graceful-degradation program that ensures that upon the occurrence of faults, the program recovers to states from where a (given) subset of its specification is satisfied. Typically, the subset of specification satisfied thus would be the critical requirements. In this paper, we focus on automatically revising a given program to obtain a corresponding graceful program, i.e., a program that satisfies a weaker specification. Specifically, this step involves adding new behaviors that satisfy the given subset of specification. Moreover, it ensures that during this process, it does not remove any behavior from the original program. With this motivation, in this paper, we focus on automatic derivation of the graceful program, i.e., a program that contains all behaviors of the original program and some new behaviors that satisfy the weaker conditions. We note that this aspect differentiates this work from previous work on controller synthesis as well as automated addition of fault tolerance in that this work requires that no new behaviors are added in the absence of faults.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127152552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a distributed TDMA negotiation approach for single-hop ad hoc network communication. It is distributed, resilient to arbitrary transient packet loss, and defines a non-overlapping TDMA schedule without the need for global time synchronization. A participating node can dynamically request a fraction of the static TDMA period T, and it will receive that fraction if enough time resources are available. In any case, every node can request, and will receive, at least a fair fraction of size 1/N. Due to its resilience to arbitrary transient packet loss, the algorithm is well suited for lossy networks such as those found in wireless communication. Our approach is designed to work efficiently in highly dynamic scenarios. We show that it defines a dynamic non-overlapping TDMA schedule even at high packet-loss rates. The performance of the TDMA negotiation is analyzed by simulation and compared to results from related work.
{"title":"RD2: Resilient Dynamic Desynchronization for TDMA over Lossy Networks","authors":"T. Hinterhofer, H. Schwefel, S. Tomic","doi":"10.1109/SRDS.2012.57","DOIUrl":"https://doi.org/10.1109/SRDS.2012.57","url":null,"abstract":"We present a distributed TDMA negotiation approach for single-hop ad-hoc network communication. It is distributed, resilient to arbitrary transient packet loss and defines a non-overlapping TDMA schedule without the need of global time synchronization. A participating node can dynamically request a fraction of the static TDMA period T. It will receive its fraction if enough time resources are available. In any case, every node can request and will receive at least a fair fraction of size 1/N. Due to its resilience to arbitrary transient packet loss, the algorithm is well suited for lossy networks like found in wireless communications. Our approach is designed to work in highly dynamic scenarios efficiently. We will show, that it defines a dynamic non-overlapping TDMA schedule even at high packet loss rates. The performance of the TDMA negotiation is analyzed by simulation and compared to results of related work.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127172945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MapReduce is a framework for processing large data sets that is widely used in cloud computing. MapReduce implementations like Hadoop can tolerate crashes and file corruptions, but there is evidence that general arbitrary faults do occur and can affect the correctness of job executions. Furthermore, many individual cloud outages have been reported, raising concerns about depending on a single cloud. We present a MapReduce runtime that tolerates arbitrary faults and runs in a set of clouds at a reasonable cost in terms of computation and execution time. The main challenge is to avoid sending over the Internet the huge amount of data that would normally be exchanged between map and reduce tasks.
{"title":"On the Feasibility of Byzantine Fault-Tolerant MapReduce in Clouds-of-Clouds","authors":"M. Correia, Pedro Costa, Marcelo Pasin, A. Bessani, Fernando M. V. Ramos, P. Veríssimo","doi":"10.1109/SRDS.2012.46","DOIUrl":"https://doi.org/10.1109/SRDS.2012.46","url":null,"abstract":"MapReduce is a framework for processing large data sets largely used in cloud computing. MapReduce implementations like Hadoop can tolerate crashes and file corruptions, but there is evidence that general arbitrary faults do occur and can affect the correctness of job executions. Furthermore, many individual cloud outages have been reported, raising concerns about depending on a single cloud. We present a MapReduce runtime that tolerates arbitrary faults and runs in a set of clouds at a reasonable cost in terms of computation and execution time. The main challenge is to avoid sending through the internet the huge amount of data that would normally be exchanged between map and reduce tasks.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124006619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web applications are increasingly used as portals to interact with back-end database systems and support business processes. This type of data-centric, workflow-driven web application is vulnerable to two types of security threats. The first is a request integrity attack, which stems from vulnerabilities in the implementation of business logic within web applications. The second is guideline violation, which stems from privilege misuse in scenarios where business logic and policies are too complex to be accurately defined and enforced. Both threats can lead to sequences of web requests that deviate from typical user behaviors. The objective of this paper is to detect anomalous user behaviors based on the sequence of their requests within a web session. We first decompose web sessions into workflows based on their data objects. In doing so, the detection of anomalous sessions is reduced to the detection of anomalous workflows. Next, we apply a hidden Markov model (HMM) to characterize workflows on a per-object basis. In this model, the implicit business logic involved in an object defines the unobserved states of the Markov process, while the web requests are the observations. To derive more robust HMMs, we extend the object-specific approach to an object-cluster approach, where objects with similar workflows are clustered and HMM models are derived on a per-cluster basis. We evaluate our models using two real systems: an open-source web application and a large web-based electronic medical record system. The results show that our approach can detect anomalous web sessions, and suggest that the clustering approach achieves relatively low false positive rates while maintaining detection accuracy.
{"title":"Detecting Anomalous User Behaviors in Workflow-Driven Web Applications","authors":"Xiaowei Li, Yuan Xue, B. Malin","doi":"10.1109/SRDS.2012.19","DOIUrl":"https://doi.org/10.1109/SRDS.2012.19","url":null,"abstract":"Web applications are increasingly used as portals to interact with back-end database systems and support business processes. This type of data-centric workflow-driven web application is vulnerable to two types of security threats. The first is an request integrity attack, which stems from the vulnerabilities in the implementation of business logic within web applications. The second is guideline violation, which stems from privilege misuse in scenarios where business logic and policies are too complex to be accurately defined and enforced. Both threats can lead to sequences of web requests that deviate from typical user behaviors. The objective of this paper is to detect anomalous user behaviors based on the sequence of their requests within a web session. We first decompose web sessions into workflows based on their data objects. In doing so, the detection of anomalous sessions is reduced to detection of anomalous workflows. Next, we apply a hidden Markov model (HMM) to characterize workflows on a per-object basis. In this model, the implicit business logic involved in this object defines the unobserved states of the Markov process, where the web requests are observations. To derive more robust HMMs, we extend the object-specific approach to an object-cluster approach, where objects with similar workflows are clustered and HMM models are derived on a per-cluster basis. We evaluate our models using two real systems, including an open source web application and a large web-based electronic medical record system. The results show that our approach can detect anomalous web sessions and lend evidence to suggest that the clustering approach can achieve relatively low false positive rates while maintaining its detection accuracy.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127319356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We design, implement, and evaluate a middleware system, HybCAST, that leverages a hybrid cellular and ad hoc network to disseminate rich content from a source to all mobile devices in a predetermined region. HybCAST targets information dissemination in scenarios (e.g., military operations, crisis alerting, and popular sporting events) in which high reliability and low latency are critical and existing fixed infrastructure, such as wired networks and 802.11 access points, is heavily loaded or partially destroyed. HybCAST implements a suite of protocols that: (i) structure the hybrid network into a hierarchy of two-level ad hoc clusters for better scalability, (ii) employ both data push and pull mechanisms for high-reliability, low-latency dissemination of rich content, and (iii) implement a near-optimal gateway selection algorithm to minimize transmission redundancy. To demonstrate its practicality and efficiency, we have implemented and deployed the HybCAST middleware on several Android smart phones and an in-network Linux machine that acts as a dissemination server. The system is evaluated via real experiments on a UMTS network and extensive packet-level simulations. Our experimental results from a live network show that HybCAST achieves 100% reliability with shorter latencies and lower overall energy consumption. Simulation results confirm that HybCAST outperforms other state-of-the-art systems in the literature: for example, it reduces dissemination latency five-fold compared to other hybrid dissemination protocols, while consuming a third of the energy of a cellular-only dissemination system. Furthermore, the simulation results demonstrate that HybCAST scales well and maintains good performance under varying numbers of mobile devices, diverse content sizes, and device mobility.
{"title":"HybCAST: Rich Content Dissemination in Hybrid Cellular and 802.11 Ad Hoc Networks","authors":"N. Do, Cheng-Hsin Hsu, N. Venkatasubramanian","doi":"10.1109/SRDS.2012.36","DOIUrl":"https://doi.org/10.1109/SRDS.2012.36","url":null,"abstract":"We design, implement, and evaluate a middleware system, HybCAST, that leverages a hybrid cellular and ad hoc network to disseminate rich contents from a source to all mobile devices in a predetermined region. HybCAST targets information dissemination over a range of scenarios (e.g., military operations, crisis alerting, and popular sporting events) in which high reliability and low latency are critical and existing fixed infrastructures such as wired networks, 802.11 access points are heavily loaded or partially destroyed. HybCAST implements a suite of protocols that: (i) structures the hybrid network into a hierarchy of two-level ad hoc clusters for better scalability, (ii) employ both data push and pull mechanisms for high reliability and low latency dissemination of rich content, and (iii) implement a near-optimal gateway selection algorithm to minimize the transmission redundancy. To demonstrate its practicality and efficiency, we have implemented and deployed the HybCAST middleware on several Android smart phones and an in-network Linux machine that acts as a dissemination server. The system is evaluated via real experiments using a UMTS network and extensive packet-level simulations. Our experimental results from a live network show that HybCAST achieves 100% reliability with shorter latencies and lower overall energy consumption. Simulation results confirm that HybCAST outperforms other state-of-the-art systems in the literature. For example, HybCAST exhibits a 5 times reduction in the dissemination latencies as compared to other hybrid dissemination protocols, while its energy consumption is a third of a cellular-only dissemination system. Furthermore, the simulation results demonstrate that HybCAST scales well and maintains good performance under varying numbers of mobile devices, diverse content sizes, and device mobility.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115464473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervising a system in operation makes it possible to detect violations of the system specification or of temporal properties, and is the first step required by any reconfiguration mechanism. In this work, we focus on run-time verification of temporal system properties in distributed and real-time systems. Based on a description of a property that includes events and temporal constraints, expressed as an arc-timed Petri net, we automatically derive a monitoring system responsible for checking this property. The proposed approach enables the distributed verification of system properties. Our contribution is twofold. On the theoretical side, we introduce a slight modification of Petri net semantics so that nets can be executed under partial executions and noisy observations. On the practical side, we show how to use this formal framework to provide a distributed and efficient monitoring system, and describe its current implementation.
{"title":"Distributed Monitoring of Temporal System Properties Using Petri Nets","authors":"Olivier Baldellon, J. Fabre, Matthieu Roy","doi":"10.1109/SRDS.2012.21","DOIUrl":"https://doi.org/10.1109/SRDS.2012.21","url":null,"abstract":"Supervising a system in operation allows to detect a violation of system specification or temporal properties, and is the first step required by any reconfiguration mechanism. In this work, we focus on run-time verification of temporal system properties in distributed and real-time systems. Based on a description of a property that includes events and temporal constraints, expressed as an arc timed Petri net, we automatically derive a monitoring system responsible for checking this property. The proposed approach enables the distributed verification of system properties. Our contribution is twofold. On the theoretical side, we introduce a slight modification of the semantics of Petri nets to be able to execute it in partial executions and noisy observation environments. On the practical side, we show how to use this formal framework to provide a distributed and efficient monitoring system, and describe its current implementation.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122604553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Denial of Service (DDoS) attacks are hard to deal with because it is difficult to distinguish legitimate traffic from malicious traffic, especially when the latter comes from distributed sources. Accurately filtering malicious traffic requires (strong but costly) packet authentication primitives, which increase design complexity and typically hurt throughput. Keeping a balance between throughput and the security/protection of the network core and end resources is a challenge. In this paper, we propose SIEVE, a lightweight distributed filtering protocol/method. Depending on the attacker's ability, SIEVE can provide a standalone filter for moderate adversary models and a complementary filter that enhances the performance of strong and more complex methods under stronger adversary models.
{"title":"Off the Wall: Lightweight Distributed Filtering to Mitigate Distributed Denial of Service Attacks","authors":"Zhang Fu, M. Papatriantafilou","doi":"10.1109/SRDS.2012.45","DOIUrl":"https://doi.org/10.1109/SRDS.2012.45","url":null,"abstract":"Distributed Denial of Service (DDoS) attacks are hard to deal with, due to the fact that it is difficult to distinguish legitimate traffic from malicious traffic, especially since the latter is from distributed sources. To accurately filter malicious traffic one needs (strong but costly) packet authentication primitives which increase the design complexity and typically affect throughput. It is a challenge to keep a balance between throughput and security/protection of the network core and end resources. In this paper, we propose SIEVE, a lightweight distributed filtering protocol/method. Depending on the attacker's ability, SIEVE can provide a standalone filter for moderate adversary models and a complementary filter which can enhance the performance of strong and more complex methods for stronger adversary models.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128785296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile ad hoc networks (MANETs) are a subset of delay-tolerant networks (DTNs) composed of several mobile devices. These dynamic environments make conventional security algorithms unreliable: nodes that are far apart may not have access to each other's public keys, making secure message exchange difficult. Other security methods rely on requesting the key from a trusted third party, which can be unavailable in a DTN. The purpose of this paper is to introduce two message security algorithms capable of delivering messages securely against either eavesdropping or manipulation. The first algorithm, Chaining, uses multiple midpoints to re-encrypt the message for the destination node. The second, Fragmenting, separates the message key into pieces that are routed and secured independently of each other. Both techniques improve security in hostile environments. This improvement comes with a performance trade-off, however: a reduced delivery ratio and increased delivery time.
{"title":"Three Point Encryption (3PE): Secure Communications in Delay Tolerant Networks","authors":"Roy Cabaniss, Vimal Kumar, S. Madria","doi":"10.1109/SRDS.2012.74","DOIUrl":"https://doi.org/10.1109/SRDS.2012.74","url":null,"abstract":"Mobile ad hoc networks (MANET) are a subset of Delay Tolerant Networks (DTNs) composed of several mobile devices. These dynamic environments makes conventional security algorithms unreliable, nodes that are far apart may not have access to the other's public key, making secure message exchange difficult. Other security methods rely on requesting the key from a trusted third party, which can be unavailable in DTN. The purpose of this paper is to introduce two message security algorithms capable of delivering messages securely against either eavesdropping or manipulation. The first algorithm, Chaining, uses multiple midpoints to re-encrypt the message for the destination node. The second, Fragmenting, separates the message key into pieces that are both routed and secured independently from each other. Both techniques have improved security in hostile environments. This improvement has a performance trade-off, however, reducing the delivery ratio and increasing the delivery time.","PeriodicalId":447700,"journal":{"name":"2012 IEEE 31st Symposium on Reliable Distributed Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131684906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}