As new techniques of fault tolerance and security emerge, so does the need for suitable tools to evaluate them. Generally, the security of a system can be estimated and verified via logical test cases, but the performance overhead of security algorithms on a system needs to be analyzed numerically. The diversity in security methods and in the design of fault tolerant systems makes it impossible for researchers to come up with a standard, affordable and openly available simulation tool, evaluation framework or experimental test-bed. Therefore, researchers choose from a wide range of available modeling-based, implementation-based or simulation-based approaches to evaluate their designs. Each of these approaches has certain merits and several drawbacks. For instance, developing a system prototype provides a more accurate system analysis but, unlike simulation, is not highly scalable. This paper presents a multi-step, simulation-based performance evaluation methodology for secure fault tolerant systems. We use a divide-and-conquer approach to model the entire secure system in a way that allows the use of different analytical tools at different levels of granularity. This evaluation procedure strikes a balance between the efficiency, effort, cost and accuracy of a system’s performance analysis. We demonstrate this approach in a step-by-step manner by analyzing the performance of a secure and fault tolerant system using a Java implementation in conjunction with an Arena simulation.
{"title":"A Multi-step Simulation Approach toward Secure Fault Tolerant System Evaluation","authors":"Ruchika Mehresh, S. Upadhyaya, K. Kwiat","doi":"10.1109/SRDS.2010.53","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
There has been prior work on protocols for accessing partitioned data. Most of this work assumes a local, cluster-based environment and targets atomic semantics. However, in widely distributed cloud storage systems, these existing protocols may not scale well. In this paper, we analyze the requirements of access protocols for storage systems based on data partitioning schemes in widely distributed cloud environments. We adopt regular semantics instead of atomic semantics to improve access efficiency. We then develop an access protocol that follows these requirements to achieve correct and efficient data accesses. Various protocols are compared experimentally, and the results show that our protocol yields much better performance than the existing ones.
{"title":"Secure, Dependable, and High Performance Cloud Storage","authors":"Yunqi Ye, Liangliang Xiao, I. Yen, F. Bastani","doi":"10.1109/SRDS.2010.30","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
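The timestamped-quorum idea underlying such access protocols can be sketched as follows; this is a generic illustration (all names and parameters here are invented), not the paper's actual protocol. Under regular semantics, a read simply returns the freshest value found at a read quorum and needs no write-back phase, which is where the efficiency gain over atomic semantics comes from:

```python
import random

class Replica:
    def __init__(self):
        self.ts, self.value = 0, None   # logical timestamp and stored value

def quorum_write(replicas, value, quorum):
    # Tag the value with a timestamp above any seen at a write quorum.
    group = random.sample(replicas, quorum)
    ts = max(r.ts for r in group) + 1
    for r in group:
        if ts > r.ts:
            r.ts, r.value = ts, value

def quorum_read(replicas, quorum):
    # Regular semantics: return the freshest value at a read quorum.
    # Any two quorums intersect when 2 * quorum > len(replicas), so the
    # read observes the latest completed write, with no write-back phase.
    group = random.sample(replicas, quorum)
    return max(group, key=lambda r: r.ts).value

replicas = [Replica() for _ in range(5)]
quorum_write(replicas, "v1", quorum=3)
assert quorum_read(replicas, quorum=3) == "v1"
```

Because any two quorums of 3 out of 5 replicas must intersect, the read is guaranteed to see the completed write regardless of which replicas are sampled.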
This paper proposes a technique for topology control (TC) of wireless nodes to meet Quality of Service (QoS) requirements between source and destination node pairs. The nodes are assumed to use a TDMA (Time Division Multiple Access) based MAC (Medium Access Control) layer. Given a set of QoS requirements, a set of wireless nodes and their initial positions, the goal is to find a topology of the nodes, by adjusting their transmission power, that meets the QoS requirements in the presence of interference while minimizing the energy consumed. The TC problem is treated as an optimization problem, and Linear Programming (LP) and Genetic Algorithm (GA) techniques are used to solve it. The solution to the optimization problem takes the form of optimal routes to be followed between each source-destination node pair; this information is used to construct the optimal topology.
{"title":"Optimization Based Topology Control for Wireless Ad Hoc Networks to Meet QoS Requirements","authors":"K. YaduKishore, Ashish Tiwari, O. Kakde","doi":"10.1109/SRDS.2010.12","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
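To make the optimization view concrete, here is a toy stand-in for the LP/GA search: a brute-force hunt for per-node transmit powers that keep every source-destination pair connected at minimum total power, under a simple disk model where a link u→v exists if u's power covers the distance. The positions, power levels, and the connectivity-only QoS constraint are all illustrative assumptions, not the paper's formulation:

```python
from itertools import product

def topology_control(positions, pairs, levels):
    """Brute-force power assignment: minimize total transmit power
    subject to every (source, dest) pair staying connected."""
    def dist(u, v):
        (x1, y1), (x2, y2) = positions[u], positions[v]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    def reaches(powers, src, dst):
        # Directed reachability under the disk model: u hears v's
        # transmissions if powers[v] >= dist(v, u).
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in positions:
                if v not in seen and powers[u] >= dist(u, v):
                    seen.add(v)
                    stack.append(v)
        return False

    best = None
    for assign in product(levels, repeat=len(positions)):
        powers = dict(zip(positions, assign))
        if all(reaches(powers, s, d) for s, d in pairs):
            if best is None or sum(assign) < sum(best.values()):
                best = powers
    return best

# Three collinear nodes; C is 2 units from A, so a direct A->C link
# would need the expensive power level 3, but relaying via B costs 2.
positions = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
best = topology_control(positions, [("A", "C")], levels=[0, 1, 3])
assert best == {"A": 1, "B": 1, "C": 0}
```

The exhaustive search is exponential in the number of nodes, which is exactly why the paper resorts to LP relaxations and genetic search for realistic network sizes.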
Traffic signals are an elementary component of all urban road networks and play a critical role in controlling the flow of vehicles. However, current road transportation systems and traffic signal implementations are very inefficient. The objective of this research is to evaluate optimal phase ordering within a signal cycle to minimize the average waiting delay, and in turn reduce fuel consumption and greenhouse gas (GHG) emissions. Through extensive simulation analysis, we show that choosing an optimal phase ordering can reduce the stopped delay by 40% per car at each signal, resulting in a saving of up to 100 gallons of fuel per traffic signal each day.
{"title":"On Optimizing Traffic Signal Phase Ordering in Road Networks","authors":"J. Barnes, V. Paruchuri, S. Chellappan","doi":"10.1109/SRDS.2010.42","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
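A back-of-the-envelope model shows why phase ordering matters: if each queued car waits until its approach's green begins, total stopped delay depends on where each phase sits in the cycle, and serving longer queues earlier helps. The queue sizes and green time below are made-up numbers, not outputs of the paper's simulation:

```python
from itertools import permutations

def total_stopped_delay(ordering, queues, green):
    """Toy delay model: every car queued on an approach waits until
    that approach's green phase starts within the cycle."""
    delay, t = 0, 0
    for approach in ordering:
        delay += queues[approach] * t   # each queued car has waited t seconds
        t += green                      # next phase starts after this green
    return delay

queues = {"N": 10, "S": 2, "E": 6, "W": 1}   # hypothetical queued cars
best = min(permutations(queues),
           key=lambda o: total_stopped_delay(o, queues, green=30))
# By an exchange argument, the optimum serves queues in decreasing size.
assert best[0] == "N"
```

Even this crude model makes the paper's point: the delay gap between the best and worst of the 24 orderings is substantial, and it compounds over every cycle of every signal.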
In mobile peer-to-peer networks, existing service discovery protocols disregard the exposure of the participating peers’ privacy details (privileged information). In these methods, the participating peers must provide their identities during the service discovery process to be authorized to utilize services. However, a peer may not be willing to reveal its privileged information until it has identified the service-providing peer. These peers thus face a dilemma: should the service-requesting or the service-providing peer reveal its identity first? The protocol presented in [12] solves this problem to some extent, but discovers only the services available in the service requester’s vicinity, among single-hop, time-synchronized peers. In this paper, we propose a privacy-preserving model based on a challenge/response scheme to discover the services available in a mobile peer-to-peer network even when the moving service requester and the service provider are multiple hops apart. Performance studies show that our protocol preserves privacy in a communication-efficient way, with fewer false positives than another recently proposed protocol.
{"title":"PrEServD - Privacy Ensured Service Discovery in Mobile Peer-to-Peer Networks","authors":"Santhosh Muthyapu, S. Madria, M. Linderman","doi":"10.1109/SRDS.2010.9","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
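The challenge/response idea can be illustrated with a small sketch (the function names and the SHA-256 choice are assumptions, not the paper's design): the requester reveals only a salted hash of the service it wants, and a provider answers only if it can reproduce that hash from a service it actually offers, so neither side discloses identity details up front:

```python
import hashlib
import os

def make_challenge(service_name):
    # The requester sends a fresh nonce plus a hash of the wanted
    # service, instead of its identity or the plaintext request.
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + service_name.encode()).hexdigest()
    return nonce, digest

def provider_response(offered_services, nonce, digest):
    # A provider responds only when it can rebuild the digest from a
    # service it really offers; otherwise it learns nothing and stays silent.
    for service in offered_services:
        if hashlib.sha256(nonce + service.encode()).hexdigest() == digest:
            return service
    return None

nonce, digest = make_challenge("print-service")
assert provider_response(["print-service", "gps"], nonce, digest) == "print-service"
assert provider_response(["gps"], nonce, digest) is None
```

The fresh nonce prevents a passive observer from precomputing digests for common service names and matching requests across the network.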
Node mobility causes fading wireless channels, which in turn induce topology changes in emerging wireless ad hoc networks. In this paper, on-line estimators and Markov models are utilized to estimate fading channel conditions. Using the estimated channel conditions, together with queue occupancy, available energy and link delay, approximate dynamic programming (ADP) techniques are utilized to find dynamic routes, solving the discrete-time Hamilton-Jacobi-Bellman (HJB) equation forward in time for the route cost in multi-channel, multi-interface networks. The performance of the proposed load balancing method in the presence of fading channels, and of the optimal route selection approach for multi-channel, multi-interface wireless ad hoc networks, is evaluated through extensive simulations and compared against AODV.
{"title":"Adaptive Routing Scheme for Emerging Wireless Ad Hoc Networks","authors":"Behdis Eslamnour, S. Jagannathan","doi":"10.1109/SRDS.2010.44","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
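The route-cost computation at the heart of such ADP schemes can be sketched as plain value iteration, with static link weights standing in for the estimated channel, queue, energy and delay costs (the graph and the weights are illustrative, not the paper's cost function):

```python
def route_costs(graph, dest, rounds=20):
    """Value-iteration sketch: each node's cost-to-destination is
    repeatedly refined from its neighbors' current estimates, a
    discrete analogue of solving the HJB equation forward in time."""
    INF = float("inf")
    cost = {n: (0.0 if n == dest else INF) for n in graph}
    for _ in range(rounds):
        for u, links in graph.items():
            if u != dest and links:
                # Bellman backup: best one-hop cost plus neighbor's estimate.
                cost[u] = min(w + cost[v] for v, w in links.items())
    return cost

# Link weights stand in for combined channel/queue/energy/delay costs.
graph = {"s": {"a": 1.0, "b": 4.0}, "a": {"d": 5.0}, "b": {"d": 1.0}, "d": {}}
costs = route_costs(graph, "d")
assert costs["s"] == 5.0   # s -> b -> d (4 + 1) beats s -> a -> d (1 + 5)
```

In the adaptive setting the weights would be refreshed each round from the on-line channel estimators, so the computed costs, and hence the routes, track the fading conditions.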
Data aggregation scheduling, or converge cast, is a fundamental pattern of communication in wireless sensor networks (WSNs), where sensor nodes aggregate and relay data to a sink node. For WSN applications that require fast response times, it is imperative that the data reach the sink as fast as possible. For such timeliness guarantees, TDMA-based scheduling can be used to assign time slots to nodes in which they can transmit messages. However, any slot assignment approach needs to be cognisant of the fact that crash failures can occur (e.g., due to battery exhaustion or defective hardware). In this paper, we study the design of such data aggregation scheduling (converge cast) protocols. We make the following contributions: (i) we identify a necessary condition for solving the converge cast problem, (ii) we introduce two versions of the converge cast problem, namely (a) a strong version and (b) a weak version, (iii) we show that the strong converge cast problem cannot be solved, (iv) we show that deterministic weak converge cast cannot be solved in the presence of crash failures, (v) we show that there is no $1$-local algorithm that solves stabilising weak converge cast in the presence of crash failures, (vi) we provide a modular $d$-local algorithm that solves stabilising weak converge cast in the presence of crash failures, where $d$ is the network radius, and (vii) we show how specific instantiations of parameters can lead to a $d$-local algorithm that achieves more efficient stabilisation. Our contributions are novel: (i) the first contribution (the necessary condition) provides a theoretical basis that explains the structure of existing converge cast algorithms, and (ii) converge cast in the presence of crash failures has not previously been studied.
{"title":"Crash-Tolerant Collision-Free Data Aggregation Scheduling for Wireless Sensor Networks","authors":"A. Jhumka","doi":"10.1109/SRDS.2010.14","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
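A minimal illustration of collision-free converge-cast scheduling (a toy centralized schedule, not the paper's $d$-local algorithm): assign unique TDMA slots in reverse-BFS order over the aggregation tree, so that every node transmits after all of its children and aggregated data flows up to the sink:

```python
from collections import deque

def convergecast_slots(tree, sink):
    """Slot assignment for converge cast: leaves get the earliest
    slots, the sink the last, and all slots are distinct so no two
    transmissions collide (tree maps each node to its children)."""
    order, queue = [], deque([sink])
    while queue:                       # BFS from the sink downwards
        node = queue.popleft()
        order.append(node)
        queue.extend(tree.get(node, []))
    # Reverse BFS order: children always precede their parent.
    return {node: slot for slot, node in enumerate(reversed(order))}

tree = {"sink": ["a", "b"], "a": ["c", "d"], "b": []}
slots = convergecast_slots(tree, "sink")
assert all(slots[child] < slots[parent]
           for parent, kids in tree.items() for child in kids)
```

Giving every node its own slot is wasteful (the schedule length equals the node count); the interesting part of the problem, and of the paper's $d$-local algorithm, is reusing slots between nodes that cannot interfere, and keeping the parent-after-children invariant when nodes crash.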
Data aggregation is a fundamental building block of modern distributed systems. Averaging-based approaches, commonly designated gossip-based, are an important class of aggregation algorithms, as they allow all nodes to produce a result, converge to any required accuracy, and work independently of the network topology. However, existing approaches exhibit many dependability issues when used in faulty and dynamic environments. This paper extends our own technique, Flow Updating, which is immune to message loss, to operate in dynamic networks, improving its fault tolerance characteristics. Experimental results show that the novel version of Flow Updating vastly outperforms previous averaging algorithms: it self-adapts to churn without requiring any periodic restart, tolerating node crashes and high levels of message loss.
{"title":"Fault-Tolerant Aggregation for Dynamic Networks","authors":"Paulo Jesus, Carlos Baquero, Paulo Sérgio Almeida","doi":"10.1109/SRDS.2010.13","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
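The core of Flow Updating can be sketched in a simplified, fault-free setting (the actual protocol additionally tolerates message loss and churn): each node keeps a flow per neighbor, estimates the average as its input value minus its net outgoing flow, and a pairwise exchange sets the shared flow so both estimates meet at their mean, while the flows' antisymmetry conserves total mass:

```python
def estimate(value, flows):
    # A node's current average estimate: input minus net outgoing flow.
    return value - sum(flows.values())

def exchange(i, j, values, flows):
    # One flow-updating step between neighbors i and j.
    ei = estimate(values[i], flows[i])
    ej = estimate(values[j], flows[j])
    delta = (ei - ej) / 2
    flows[i][j] += delta           # i "ships" delta of its mass to j
    flows[j][i] = -flows[i][j]     # antisymmetry conserves the global sum

values = {0: 10.0, 1: 2.0, 2: 6.0}
flows = {i: {j: 0.0 for j in values if j != i} for i in values}
for _ in range(50):                # gossip rounds over a triangle 0-1-2-0
    exchange(0, 1, values, flows)
    exchange(1, 2, values, flows)
    exchange(2, 0, values, flows)
avg = sum(values.values()) / len(values)
assert all(abs(estimate(values[i], flows[i]) - avg) < 1e-6 for i in values)
```

Because the state is flows rather than averaged values, a lost or repeated message cannot create or destroy mass; re-running an exchange is idempotent, which is the property the paper exploits for fault tolerance.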
This paper presents a recovery architecture for in-memory data management systems. Recovery in such systems boils down to solving two problems: retrieving and installing the last committed image of the crashed database on a new server, and replaying the updates missing from that image. We improve recovery time with a novel technique called On-Demand Recovery, which removes the need to replay all missing updates before new transactions can be accepted. We have implemented and thoroughly evaluated the technique, and we show that in some cases On-Demand Recovery can reduce recovery time by more than 50%.
{"title":"On-Demand Recovery in Middleware Storage Systems","authors":"Lásaro J. Camargos, F. Pedone, A. Pilchin, M. Wieloch","doi":"10.1109/SRDS.2010.31","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
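The on-demand idea can be illustrated with a toy key-value store (class and field names are invented here, not the paper's architecture): start from the last committed image, index the missing log updates per key, and replay a key's updates only when that key is first accessed, so the store accepts requests before the full replay finishes:

```python
class OnDemandStore:
    """Sketch of lazy log replay for recovery: the store is usable
    immediately after installing the checkpoint image, and missing
    updates are applied per key, on first access."""
    def __init__(self, image, log):
        self.data = dict(image)            # last committed image
        self.pending = {}                  # key -> missing updates, in order
        for key, value in log:
            self.pending.setdefault(key, []).append(value)

    def get(self, key):
        # Replay this key's missing updates on demand, then serve it.
        for value in self.pending.pop(key, []):
            self.data[key] = value
        return self.data.get(key)

store = OnDemandStore(image={"x": 1}, log=[("x", 2), ("y", 7), ("x", 3)])
assert store.get("x") == 3    # both logged updates to x applied on first read
assert store.get("y") == 7
assert not store.pending      # everything touched, so replay is complete
```

The recovery-time win comes from moving replay off the critical path: keys that are never read again never pay their replay cost up front.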
Partial replication is a way to increase the scalability of replicated systems: updates only need to be applied to a subset of the system's sites, thus allowing replicas to handle independent parts of the workload in parallel. In this paper, we propose P-Store, a partially replicated key-value store for wide area networks. In P-Store, each transaction T optimistically executes on one or more sites and is then certified to guarantee serializability of the execution. The certification protocol is genuine: it involves only sites that replicate data items read or written by T, and it incorporates a mechanism to minimize a convoy effect. P-Store makes thrifty use of an atomic multicast service to guarantee correctness: no messages need to be multicast during T's execution, and a single message is multicast to certify T. In case T is global, that is, when T's execution is distributed across different geographical locations, an extra vote phase is required. Our approach may offer better scalability than previously proposed solutions, which either require multiple atomic multicast messages to execute T or are non-genuine. Experimental evaluations reveal that the convoy effect plays an important role even when one percent of the transactions are global. We also compare the scalability of our approach to a fully replicated solution as the proportion of global transactions and the number of sites vary.
{"title":"P-Store: Genuine Partial Replication in Wide Area Networks","authors":"Nicolas Schiper, P. Sutra, F. Pedone","doi":"10.1109/SRDS.2010.32","journal":"2010 29th IEEE Symposium on Reliable Distributed Systems","publicationDate":"2010-10-31"}
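The certification step can be illustrated with a standard backward-validation check (a generic sketch, not P-Store's exact genuine protocol, which runs only at the sites replicating T's items): T must abort if any transaction that committed after T started wrote an item that T read:

```python
def certify(txn_readset, txn_start, committed):
    """Backward certification: reject T when a concurrent committer's
    write set intersects T's read set, i.e., T read stale data.
    `committed` is a list of (commit_time, writeset) pairs."""
    for commit_time, writeset in committed:
        if commit_time > txn_start and writeset & txn_readset:
            return False    # stale read detected: abort T
    return True             # no conflicting concurrent commit: T may commit

committed = [(5, {"a"}), (12, {"b", "c"})]
assert certify({"a"}, txn_start=10, committed=committed)      # 'a' committed before T began
assert not certify({"b"}, txn_start=10, committed=committed)  # conflicts with a later commit
```

In the partially replicated setting, each site can only run this check for the items it stores, which is why a global transaction needs the extra vote phase: the involved sites must agree that every fragment of the check passed.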