B. Sangchoolie, Fatemeh Ayatolahi, R. Johansson, J. Karlsson
In this paper we study the impact of compiler optimizations on the error sensitivity of twelve benchmark programs. We conducted extensive fault injection experiments in which bit-flip errors were injected into instruction set architecture registers and main memory locations. The results show that the percentage of silent data corruptions (SDCs) in the output of the optimized programs is only marginally higher compared to that observed for the non-optimized programs. This suggests that compiler optimizations can be used in safety- and mission-critical systems without increasing the risk that the system produces undetected erroneous outputs. In addition, we investigate to what extent a program's source code implementation affects its error sensitivity. To this end, we perform experiments with five implementations of a bit count algorithm, considering the impact of the implementation as well as of compiler optimizations. The results of these experiments give valuable insights into how compiler optimizations can be used to reduce the error sensitivity of registers and main memory sections. They also show how sensitive locations requiring additional protection, e.g., by means of software-based fault tolerance techniques, can be identified.
{"title":"A Study of the Impact of Bit-Flip Errors on Programs Compiled with Different Optimization Levels","authors":"B. Sangchoolie, Fatemeh Ayatolahi, R. Johansson, J. Karlsson","doi":"10.1109/EDCC.2014.30","DOIUrl":"https://doi.org/10.1109/EDCC.2014.30","url":null,"abstract":"In this paper we study the impact of compiler optimizations on the error sensitivity of twelve benchmark programs. We conducted extensive fault injection experiments where bit-flip errors were injected in instruction set architecture registers and main memory locations. The results show that the percentage of silent data corruptions (SDCs) in the output of the optimized programs is only marginally higher compare to that observed for the non-optimized programs. This suggests that compiler optimizations can be used in safety- and mission-critical systems without increasing the risk that the system produces undetected erroneous outputs. In addition, we investigate to what extent the source code implementation of a program affects the error sensitivity of a program. To this end, we perform experiments with five implementations of a bit count algorithm. In this investigation, we consider the impact of the implementation as well as compiler optimizations. The results of these experiments give valuable insights into how compiler optimizations can be used to reduce error sensitive of registers and main memory sections. They also show how sensitive locations requiring additional protection, e.g., by the use of software-based fault tolerance techniques, can be identified.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116064670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite the commercial success of Location-Based Services (LBS), the sensitivity of the data they manage, especially data concerning the user's location, makes them a natural target for geo-location inference attacks. These attacks are a variant of traditional inference attacks that aim to disclose personal aspects of users' lives from their geo-location datasets. Since this threat can seriously compromise the privacy of users, and thus confidence in LBS, a deeper understanding of geo-location inference attacks is essential to protecting LBS. Towards this goal, this short paper takes a step forward by modelling well-known types of geo-location inference attacks, as a prerequisite for quantitatively assessing the privacy risk they pose.
{"title":"Geo-Location Inference Attacks: From Modelling to Privacy Risk Assessment (Short Paper)","authors":"Miguel Núñez del Prado Cortez, Jesus Frignal","doi":"10.1109/EDCC.2014.32","DOIUrl":"https://doi.org/10.1109/EDCC.2014.32","url":null,"abstract":"Despite the commercial success of Location-Based Services (LBS), the sensitivity of the data they manage, specially those concerning the user's location, makes them a suitable target for geo-location inference attacks. These attacks are a new variant of traditional inference attacks aiming at disclosing personal aspects of users' life from their geo-location datasets. Since this threat might dramatically compromise the privacy of users, and so the confidence of LBS, a deeper knowledge of geo-location inference attacks becomes essential to protect LBS. To contribute to this goal, this short paper makes a step forward to model well-known types of geo-location inference attacks as a previous step to quantitatively assess the privacy risk they pose.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133365757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Popov, A. Povyakalo, Vladimir Stankovic, L. Strigini
Despite the widespread adoption of software diversity in some industries, there is still controversy about its benefits for reliability, safety or security. We take the perspective of diversity as a risk reduction strategy, in the face of the uncertainty about the dependability levels delivered by software development. We specifically consider the problem faced at the start of a project, when the assessment of potential benefits, however uncertain, must determine the decision whether to adopt diversity. Using probabilistic modelling, we discuss how different application areas require different measures of the effectiveness of diversity for reducing risk. Extreme values of achieved reliability, and especially, in some applications, the likelihood of delivering "effectively fault-free" programs, may be the dominant factor in this effect. Therefore, we cast our analysis in terms of the whole distribution of achieved probabilities of failure per demand, rather than averages, as has usually been done in past research. This analysis highlights possible, and indeed frequent, errors in generalizations from experiments, and identifies risk reduction effects that can be proved to derive from independent development of diverse software versions. Lastly, we demonstrate that, despite the difficulty of predicting the actual advantages of specific practices for achieving diversity, "forcing" diversity, by explicitly mandating diverse designs, development processes, etc., for the different versions rather than just ensuring separate development, is robust in terms of worst-case effects in the face of uncertainty about the reliability that the different methods will achieve in a specific project: a result with direct applicability to practice.
{"title":"Software Diversity as a Measure for Reducing Development Risk","authors":"P. Popov, A. Povyakalo, Vladimir Stankovic, L. Strigini","doi":"10.1109/EDCC.2014.36","DOIUrl":"https://doi.org/10.1109/EDCC.2014.36","url":null,"abstract":"Despite the widespread adoption of software diversity in some industries, there is still controversy about its benefits for reliability, safety or security. We take the prospective of diversity as a risk reduction strategy, in face of the uncertainty about the dependability levels delivered by software development. We specifically consider the problem faced at the start of a project, when the assessment of potential benefits, however uncertain, must determine the decision whether to adopt diversity. Using probabilistic modelling, we discuss how different application areas require different measures of the effectiveness of diversity for reducing risk. Extreme values of achieved reliability, and especially, in some applications, the likelihood of delivering \"effectively fault-free\" programs, may be the dominant factor in this effect. Therefore, we cast our analysis in terms of the whole distribution of achieved probabilities of failure per demand, rather than averages, as usually done in past research. This analysis highlights possible and indeed frequent errors in generalizations from experiments, and identifies risk reduction effects that can be proved to derive from independent developments of diverse software versions. Last, we demonstrate that, despite the difficulty of predicting the actual advantages of specific practices for achieving diversity, the practice of \"forcing\" diversity by explicitly mandating diverse designs, development processes, etc., for different versions, rather than just ensuring separate development, is robust, in terms of worst-case effects, in the face of uncertainty about the reliability that the different methods will achieve in a specific project, a result with direct applicability to practice.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122570125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently, the most efficient approach for solving the NP-hard terminal-pair reliability problem is the Kuo-Lu-Yeh algorithm, which applies the technique of Edge Expansion Diagrams (EED) coupled with Ordered Binary Decision Diagrams (OBDD). In this work we show that this algorithm can be enhanced significantly by removing redundant biconnected components, which can be done in linear time and without additional memory. We empirically evaluated our approach against the original one on 24 benchmark networks, and additionally examined it statistically using randomly generated graphs. Our new approach performs significantly better in runtime and memory consumption for most of the benchmark networks. For a regular 3x20 grid network, we achieved a speedup of 464, with memory consumption dropping to 0.3 percent of the original. Thus, in practice, runtime and memory consumption are drastically reduced for many "difficult" networks. When applied to networks without redundant biconnected components, there is no memory overhead and the additional runtime is negligible.
{"title":"Improving the Kuo-Lu-Yeh Algorithm for Assessing Two-Terminal Reliability","authors":"Minh Lê, M. Walter, J. Weidendorfer","doi":"10.1109/EDCC.2014.11","DOIUrl":"https://doi.org/10.1109/EDCC.2014.11","url":null,"abstract":"Currently, the most efficient approach for solving the NP-hard terminal-pair reliability problem is the Kuo-Lu-Yeh algorithm which applies the technique of Edge Expansion Diagram (EED) coupled with Ordered Binary Decision Diagram (OBDD). In this work we will show that this algorithm can be enhanced significantly by removing redundant biconnected components, which can be done in linear time and without needing additional memory. We empirically evaluated our approach against the original one by means of 24 benchmark networks. In addition, we examined our approach statistically using randomly generated graphs. Our new approach performs significantly better regarding runtime and memory consumption for most of the benchmark networks. For a regular 3x20 grid network we have even achieved a speedup of 464 and the memory consumption goes down to 0.3 percent. Thus, in practice, runtime and memory consumptions are drastically reduced for many \"difficult\" networks. When applied to networks without redundant biconnected components, there is no memory overhead and the additional runtime is negligible.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132029895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specifying a resilience benchmark is a difficult task due to the complexity of the benchmark components and the need for standardization. Existing approaches to benchmark specification, including document-based and program-based approaches, are limited in their scope and in the support they provide to benchmark users. In this short paper we present the work we are conducting towards the definition of a description language for resilience benchmarks in the domain of satellite simulators.
{"title":"Towards a Resilience Benchmarking Description Language for the Context of Satellite Simulators (Short Paper)","authors":"D. Azevedo, A. Ambrosio, M. Vieira","doi":"10.1109/EDCC.2014.19","DOIUrl":"https://doi.org/10.1109/EDCC.2014.19","url":null,"abstract":"Specifying a resilience benchmark is a difficult task due to the complexity of the benchmark components and the need for standardization. Existing approaches for benchmark specification, including document-based and program-based approaches, are limited in terms of their scope and in the support they provide to the benchmark users. In this short paper we present the work we are conducting towards the definition of a description language for resilience benchmarks for the domain of satellite simulators.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124879043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Negin Fathollahnejad, E. Villani, R. Pathan, R. Barbosa, J. Karlsson
This paper presents a probabilistic analysis of disagreement for a family of simple synchronous consensus algorithms aimed at solving the 1-of-n selection problem in the presence of unrestricted communication failures. In this problem, a set of n nodes is to select one common value among n proposed values. There are two possible outcomes of each node's selection process: selecting a value or aborting. Disagreement occurs when some nodes select the same value while other nodes decide to abort. Previous research has shown that it is impossible to guarantee agreement among nodes subject to an unbounded number of message losses. Our aim is to find decision algorithms for which the probability of disagreement is as low as possible. In this paper, we investigate two decision criteria, one optimistic and one pessimistic, under two communication failure models, symmetric and asymmetric. For symmetric communication failures, we present closed-form expressions for the probability of disagreement. For asymmetric failures, we analyse the algorithm using a probabilistic model checking tool. Our results show that the choice of decision criterion significantly influences the probability of disagreement of the 1-of-n selection algorithm. The optimistic decision criterion shows a lower probability of disagreement compared to the pessimistic one when the probability of message loss is below 30% to 70%. On the other hand, the optimistic decision criterion in general has a higher maximum probability of disagreement than the pessimistic criterion.
{"title":"On Probabilistic Analysis of Disagreement in Synchronous Consensus Protocols","authors":"Negin Fathollahnejad, E. Villani, R. Pathan, R. Barbosa, J. Karlsson","doi":"10.1109/EDCC.2014.26","DOIUrl":"https://doi.org/10.1109/EDCC.2014.26","url":null,"abstract":"This paper presents a probabilistic analysis of disagreement for a family of simple synchronous consensus algorithms aimed at solving the 1-of-n selection problem in presence of unrestricted communication failures. In this problem, a set of n nodes are to select one common value among n proposed values. There are two possible outcomes of each node's selection process: decide to select a value or abort. We have disagreement if some nodes select the same value while other nodes decide to abort. Previous research has shown that it is impossible to guarantee agreement among the nodes subjected to an unbounded number of message losses. Our aim is to find decision algorithms for which the probability of disagreement is as low as possible. In this paper, we investigate two different decision criteria, one optimistic and one pessimistic. We assume two communication failure models, symmetric and asymmetric. For symmetric communication failures, we present the closed-form expressions for the probability of disagreement. For asymmetric failures, we analyse the algorithm using a probabilistic model checking tool. Our results show that the choice of decision criterion significantly influences the probability of disagreement for the 1-of-n selection algorithm. The optimistic decision criterion shows a lower probability of disagreement compare to the pessimistic one when the probability of message loss is less than 30% to 70%. On the other hand, the optimistic decision criterion has in general a higher maximum probability of disagreement compared to the pessimistic criterion.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133635706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We investigate how predictive event-based modelling can inform operational decision making in complex systems with component failures. By relating the status of components to service availability, and using stochastic temporal logic reasoning, we quantify the risk of service failure now and in the future, after a given elapsed time. Decisions can then be taken according to those risks. We demonstrate the approach by applying it to an industrial case-study system, deployed for some time, in which component failures are sensed and monitored. A novel aspect is that we calibrate the models according to inferences over historical field data, so that the results of our reasoning can inform decision making in the actual deployed system.
{"title":"Do I Need to Fix a Failed Component Now, or Can I Wait Until Tomorrow?","authors":"M. Calder, Michele Sevegnani","doi":"10.1109/EDCC.2014.15","DOIUrl":"https://doi.org/10.1109/EDCC.2014.15","url":null,"abstract":"We investigate how predictive event-based modelling can inform operational decision making in complex systems with component failures. By relating the status of components to service availability, and using stochastic temporal logic reasoning, we quantify the risk of service failure now, and in the future, after a given elapsed time. Decisions can then be taken according to those risks. We demonstrate the approach through application to an industrial case study system in which component failures are sensed and monitored. The system has been deployed for some time. A novel aspect is we calibrate the model(s) according to inferences over historical field data, thus the results of our reasoning can inform decision making in the actual deployed system.","PeriodicalId":364377,"journal":{"name":"2014 Tenth European Dependable Computing Conference","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126864742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}